This commit adds gdk_display_get_setting and a ::setting-changed
signal, which will replace the settings event we use now. Note
that I've done away with the GdkSettingAction argument that the
event has, since we are not using it at all.
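A minimal usage sketch, assuming the ::setting-changed handler receives the
setting name and gdk_display_get_setting() fills a caller-initialized GValue:

#include <gdk/gdk.h>

static void
on_setting_changed (GdkDisplay *display,
                    const char *setting,
                    gpointer    user_data)
{
  GValue value = G_VALUE_INIT;

  /* "gtk-double-click-time" is an int-valued setting; other settings
   * need a GValue initialized to their own type. */
  if (g_strcmp0 (setting, "gtk-double-click-time") == 0)
    {
      g_value_init (&value, G_TYPE_INT);
      if (gdk_display_get_setting (display, setting, &value))
        g_print ("double-click time is now %d ms\n",
                 g_value_get_int (&value));
      g_value_unset (&value);
    }
}

static void
watch_settings (GdkDisplay *display)
{
  g_signal_connect (display, "setting-changed",
                    G_CALLBACK (on_setting_changed), NULL);
}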
This was by all accounts broken, and is basically an implementation detail
of the X11 backend, since the pointer-emulating touch just steals the pointer
cursor; it should be reimplemented there.
One used to point to the toplevel and the other to the client-side window
that the pointer pointed to. The latter was made to be like the former in
most places, so put those together, and fix the remaining cases where the
variable might not end up with a toplevel/native window.
This is not necessary now that there are no client-side windows to track.
The only removed piece that could make sense is emission of grab broken
events, but it's already a stretch since the semantics of those with
multiple touchpoints are unclear.
Anyhow, this should be fixed at the GTK level, while we let GDK deal with
seat/device level grabs.
Those should be interpreted by widget-local gestures, not guessed at a
high level with no notion of the specific context. Users will want
GtkGestureMultiPress to replace these events.
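For example, a hedged sketch of the replacement path (widget setup and the
gesture's lifetime management are elided for brevity):

#include <gtk/gtk.h>

static void
on_pressed (GtkGestureMultiPress *gesture,
            gint                  n_press,
            gdouble               x,
            gdouble               y,
            gpointer              user_data)
{
  /* n_press counts consecutive presses, so 2 means a double click */
  if (n_press == 2)
    g_print ("double click at %.0f,%.0f\n", x, y);
}

static void
attach_click_gesture (GtkWidget *widget)
{
  /* the caller is expected to keep a reference to the gesture for as
   * long as the widget needs it */
  GtkGesture *gesture = gtk_gesture_multi_press_new (widget);

  g_signal_connect (gesture, "pressed", G_CALLBACK (on_pressed), NULL);
}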
gdk_window_create_vulkan_context() now exists and will return a Vulkan
context for the given window. It even initializes the surface. But it
doesn't do anything useful yet.
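A hedged usage sketch, assuming the constructor follows the usual GDK pattern
of returning NULL and setting a GError on failure:

static GdkVulkanContext *
try_create_vulkan_context (GdkWindow *window)
{
  GError *error = NULL;
  GdkVulkanContext *context;

  context = gdk_window_create_vulkan_context (window, &error);
  if (context == NULL)
    {
      g_warning ("No Vulkan support: %s", error->message);
      g_clear_error (&error);
    }

  return context;
}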
Adds the gdk_display_ref_vulkan() and gdk_display_unref_vulkan()
functions, which set up/tear down Vulkan support for the display.
Nothing is using those functions yet.
Switch code to use gdk_display_is_composited() instead.
The new code also doesn't use a vfunc to query the property but rather
requires the backend to call set_composited()/set_rgba() to change the
value.
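On the consumer side the swap is mechanical; a brief sketch (the helper is
illustrative):

static gboolean
window_can_be_translucent (GdkWindow *window)
{
  /* TRUE when the windowing system will actually blend RGBA windows */
  return gdk_display_is_composited (gdk_window_get_display (window));
}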
The update tracking code was ugly and used deprecated drawing APIs. It
was also in the wrong place.
So instead of trying to keep it working, I'll remove it. We need to find
a better place to put it and make it work there.
Some backends (namely Wayland) do not support global coordinates so
using the window position to determine the monitor will always fail on
such backends.
In such cases, the backend itself might be better suited to identify
the monitor a given window resides on.
Add a vfunc get_monitor_at_window() to the display class so that we can
use the backend to retrieve the monitor, if the backend implements it.
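A hedged sketch of the resulting dispatch; the vfunc signature and the
coordinate-based fallback shown here are assumptions based on the description
above:

GdkMonitor *
gdk_display_get_monitor_at_window (GdkDisplay *display,
                                   GdkWindow  *window)
{
  GdkDisplayClass *class = GDK_DISPLAY_GET_CLASS (display);
  int x, y;

  /* Backends that know better (e.g. Wayland) pick the monitor directly. */
  if (class->get_monitor_at_window != NULL)
    return class->get_monitor_at_window (display, window);

  /* Fallback for backends with a global coordinate space. */
  gdk_window_get_origin (window, &x, &y);
  return gdk_display_get_monitor_at_point (display, x, y);
}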
https://bugzilla.gnome.org/show_bug.cgi?id=766566
gdk_display_list_devices is deprecated and all the backends
implement the same fallback by delegating to the device manager
and caching the list (caching it is needed since the method does
not transfer ownership of the container).
The compat code can be shared among all backends and we can
initialize the list lazily only in the case someone calls the
deprecated method.
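A hedged sketch of that shared compat path (the cache field name is an
assumption):

GList *
gdk_display_list_devices (GdkDisplay *display)
{
  if (display->input_devices == NULL)   /* hypothetical cache field */
    {
      GdkDeviceManager *manager = gdk_display_get_device_manager (display);

      /* cache the list because the deprecated method does not transfer
       * ownership of the container to the caller */
      display->input_devices =
        gdk_device_manager_list_devices (manager, GDK_DEVICE_TYPE_MASTER);
    }

  return display->input_devices;
}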
https://bugzilla.gnome.org/show_bug.cgi?id=762891
Remember the last source device we're generating multiple clicks for,
just so we can bail out if the device changed. That will just reset
the counting.
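Sketched out (the per-display click-tracking struct here is hypothetical):

typedef struct {
  GdkDevice *last_click_source;   /* hypothetical per-display state */
  int        click_count;
} ClickState;

static void
maybe_reset_click_count (ClickState *state,
                         GdkEvent   *event)
{
  GdkDevice *source = gdk_event_get_source_device (event);

  if (source != state->last_click_source)
    {
      /* a different source device started clicking: restart the count */
      state->click_count = 0;
      state->last_click_source = source;
    }
}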
https://bugzilla.gnome.org/show_bug.cgi?id=723659
To properly support multithreaded use, we use a global GPrivate
to track the current context. Since we also don't need to track
the current context on the display, we move gdk_display_destroy_gl_context
to GdkGLContext::discard.
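A minimal sketch of the pattern, assuming a file-scope GPrivate and a helper
(hypothetical name) called once the backend has made the context current:

#include <gdk/gdk.h>

/* one slot per thread; no destructor, the context lifetime is managed elsewhere */
static GPrivate thread_current_context = G_PRIVATE_INIT (NULL);

static void
mark_context_current (GdkGLContext *context)   /* hypothetical helper */
{
  g_private_set (&thread_current_context, context);
}

GdkGLContext *
gdk_gl_context_get_current (void)
{
  return g_private_get (&thread_current_context);
}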
It's not really reasonable to handle failures of make_current; it
basically only happens if you pass invalid arguments to it, and
that's not something we trap for similar things on the X drawing side.
If GL is not supported that should be handled by the context creation
failing, and anything going wrong after that is essentially a critical
(or an async X error).
This adds the new type GdkGLContext that wraps an OpenGL context for a
particular native window. It also adds support for the gdk paint
machinery to use OpenGL to draw everything. As soon as anyone creates
a GL context for a native window we create a "paint context" for that
GdkWindow and switch to using GL for painting it.
This commit contains only an implementation for X11 (using GLX).
The way painting works is that all client gl contexts draw into
offscreen buffers rather than directly to the back buffer, and the
way something gets onto the window is by using gdk_cairo_draw_from_gl()
to draw part of that buffer onto the draw cairo context.
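For example, a hedged sketch from a widget's draw handler (the texture setup
is assumed; only gdk_cairo_draw_from_gl() is the API described here):

#include <epoxy/gl.h>
#include <gtk/gtk.h>

static gboolean
draw_cb (GtkWidget *widget,
         cairo_t   *cr,
         gpointer   user_data)
{
  guint texture = GPOINTER_TO_UINT (user_data);  /* rendered earlier with GL */
  int w = gtk_widget_get_allocated_width (widget);
  int h = gtk_widget_get_allocated_height (widget);

  /* draw the GL texture onto the given cairo context; GDK picks the GL
   * fast path or the read-back fallback described below */
  gdk_cairo_draw_from_gl (cr,
                          gtk_widget_get_window (widget),
                          texture, GL_TEXTURE, 1 /* buffer scale */,
                          0, 0, w, h);

  return FALSE;
}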
As a fallback (if we're doing redirected drawing or some effect like a
cairo_push_group()) we read back the gl buffer into memory and composite
using cairo. This means that GL rendering works in all cases, including
rendering to a PDF. However, this is not particularly fast.
In the *typical* case, where we're drawing directly to the window in
the regular paint loop we hit the fast path. The fast path uses opengl
to draw the buffer to the window back buffer, either by blitting or
texturing. Then we track the region that was drawn, and when the draw
ends we paint the normal cairo surface to the window (using
texture-from-pixmap in the X11 case, or texture from cairo image
otherwise) in the regions where no GL was painted.
There are some complexities wrt layering of gl and cairo areas though:
* We track via gdk_window_mark_paint_from_clip() whenever gtk is
painting over a region we previously rendered with opengl
(flushed_region). This area (needs_blend_region) is blended
rather than copied at the end of the frame.
* If we're drawing a gl texture with alpha we first copy the current
cairo_surface inside the target region to the back buffer before
we blend over it.
These two operations allow us full stacking of transparent gl and cairo
regions.
If a motion event handler (or other handler running from the flush-events
phase of the frame clock) recursed the main loop then flushing wouldn't
complete until after the recursed main loop returned, and various aspects
of the state would get out of sync.
To fix this, change flushing of the event queue to simply mark events as
ready to flush, and let normal event delivery handle the rest.
https://bugzilla.gnome.org/show_bug.cgi?id=705176
Since events can be paused independently for each window during processing,
make _gdk_display_pause_events() count how many times it is called
and only unpause when unpause_events() is called the same number of
times.
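Roughly (a sketch; the counter field and the unpause counterpart's exact
name are assumptions):

void
_gdk_display_pause_events (GdkDisplay *display)
{
  display->event_pause_count++;
}

void
_gdk_display_unpause_events (GdkDisplay *display)
{
  g_return_if_fail (display->event_pause_count > 0);

  display->event_pause_count--;
  /* delivery only resumes once every pause has been matched by an unpause */
}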
https://bugzilla.gnome.org/show_bug.cgi?id=685460
When we have pending motion events, instead of delivering them
directly, request the new FLUSH_EVENTS phase of the frame clock.
This allows us to compress repeated motion events sent to the
same window.
In the FLUSH_EVENTS phase, which occurs at priority GDK_PRIORITY_EVENTS + 1,
we deliver any pending motion events then turn off event delivery
until the end of the next frame. Turning off event delivery means
that we'll reliably paint the compressed motion events even if more
have arrived.
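A hedged sketch of the request (the wrapper is illustrative; the phase
constant and gdk_frame_clock_request_phase() are the public pieces involved):

static void
schedule_motion_flush (GdkWindow *window)
{
  GdkFrameClock *clock = gdk_window_get_frame_clock (window);

  /* instead of delivering the motion event now, queue it and ask for the
   * FLUSH_EVENTS phase; the compressed motion is delivered there and
   * further delivery pauses until the end of the frame */
  gdk_frame_clock_request_phase (clock, GDK_FRAME_CLOCK_PHASE_FLUSH_EVENTS);
}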
Add a motion-compression test case which demonstrates the behavior when
an application takes too long to handle motion events. It is unusable
without this patch but behaves fine with it.
https://bugzilla.gnome.org/show_bug.cgi?id=685460
Anytime a touch device interacts, crossing event generation
switches to a touch mode where only events with mode
GDK_CROSSING_TOUCH_BEGIN/END are handled, and those are sent
around touch begin/end. Those are virtual, as the master
device may still stay on the window.
Whenever there is a switch of slave device (the user starts
using another non-touch device), a crossing event with mode
GDK_CROSSING_DEVICE_SWITCH may be generated if needed, and normal
crossing event handling is resumed.
This last slave device (stored per master) is used to fill
in the missing slave device in synthesized crossing events
that are not directly caused by a device event (i.e. due to
configure events or grabs).
The previous function gdk_drag_get_protocol_for_display() took native
window handles, so it had to be changed. Because it didn't do what it
was named to do (it didn't return a protocol even though it was named
get_protocol) and because it doesn't operate on the display anymore but
on the actual window, it's now called gdk_window_get_drag_protocol().
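A hedged usage sketch of the renamed call (the helper is illustrative):

static gboolean
window_accepts_drags (GdkWindow *dest_window)
{
  GdkDragProtocol protocol;

  /* passing a non-NULL second argument would also return the window
   * that actually receives the drop (e.g. a proxy window) */
  protocol = gdk_window_get_drag_protocol (dest_window, NULL);

  return protocol != GDK_DRAG_PROTO_NONE;
}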
... and all APIs making use of it.
That code looks like it hasn't been touched in years, Google Code Search
didn't find any users, and most importantly it's a horrendous API, so
let's just make it die instead of having to port it over to
non-GdkNativeWindow usage, which would be required for multi-backend
GDK.
http://mail.gnome.org/archives/gtk-devel-list/2011-January/msg00049.html
There's no usecase for them, so remove them before we have to commit to
keeping an API.
Make the hooks private for now, actually removing them will come in
followup patches.
We want to have different window types for different displays, so we can
write code like this:
#if GDK_WINDOWING_X11
  if (GDK_IS_X11_WINDOW (window))
    {
      /* do x11 stuff */
    }
  else
#endif
#if GDK_WINDOWING_WAYLAND
  if (GDK_IS_WAYLAND_WINDOW (window))
    {
      /* do wayland stuff */
    }
  else
#endif
    {
      /* do stuff for unsupported system */
    }
This requires different GdkWindow types and we currently don't have
that, as only the GdkWindowImpl differs. With this method, every backend
defines a custom type that's just a simple subclass of GdkWindow. This
way GdkWindow behaves like all the other types (visuals, screens,
displays) and we can write code like the above.
Move everything dealing with compound text to be X11 specific
Only gdk_text_property_to_utf8_list and gdk_utf8_to_string_target
are kept across backends, so add vfuncs for these.
Also, remove the non-multihead-safe variants of all these.