We are getting the mime data destroy notify called when we
destroy the surface in finalize. Trying to set the XSync counters
at this time is a) pointless and b) causes an X error because
the counters have already been destroyed.
To avoid this, unhook the damage tracking before destroying
the surface.
https://bugzilla.gnome.org/show_bug.cgi?id=760188
We are setting mime data with a destroy notify on the cairo
surface to get notified when cairo registers damage for the
surface (in that case, it clears the mime data, calling the
destroy notify). Unfortunately, the destroy notify is also
called when we remove the mime data ourselves, which was
not intentional.
Use a flag in the window impl struct to ignore the callback
when we are clearing the hook.
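For illustration, a minimal sketch of the flag pattern (the struct,
field, function names and mime type here are hypothetical, not the
actual GDK identifiers):

    #include <cairo.h>
    #include <glib.h>

    typedef struct {
      cairo_surface_t *cairo_surface;
      gboolean clearing_hook;  /* TRUE while we remove the mime data ourselves */
    } GdkWindowImplX11Sketch;

    static void
    damage_destroy_notify (void *data)
    {
      GdkWindowImplX11Sketch *impl = data;

      if (impl->clearing_hook)
        return;  /* not real damage: we are unhooking ourselves */

      /* ... cairo registered damage for the surface ... */
    }

    static void
    unhook_damage_tracking (GdkWindowImplX11Sketch *impl)
    {
      impl->clearing_hook = TRUE;
      /* Removing the mime data triggers the destroy notify above. */
      cairo_surface_set_mime_data (impl->cairo_surface, "x-gdk/damage",
                                   NULL, 0, NULL, NULL);
      impl->clearing_hook = FALSE;
    }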
Instead of creating an intermediate pixbuf, just render
the window surface onto the new surface. Doing things this
way lets us avoid the cairo_surface_mark_dirty() call in
gdk_pixbuf_get_from_window(), which is not generally safe
to call on 'random' surfaces - it asserts that the surface
has no mime data attached, and the X11 backend uses mime
data for damage tracking purposes...
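For illustration, the direct copy amounts to something like this
(a sketch; window_surface is the window's backing surface and
snapshot is the newly created surface):

    #include <cairo.h>

    /* Paint the window surface straight onto the snapshot surface,
     * with no intermediate pixbuf and no mark_dirty call. */
    cairo_t *cr = cairo_create (snapshot);
    cairo_set_source_surface (cr, window_surface, 0, 0);
    cairo_paint (cr);
    cairo_destroy (cr);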
We destroy the widget that is wrapped around the drag window
when the object data on the drag context gets cleared. Destroying
the window before that happens leads to unpleasantness. E.g. we may
try to access the frame clock, which doesn't exist anymore, and
things go downhill from there. So, keep the window alive for
a little longer.
Always returning a left_ptr if we can't find anything better
broke Firefox's application-specific fallback for missing cursors.
Keep that working by only doing the fallback for the CSS cursor
names, not for things like hashes.
https://bugzilla.gnome.org/show_bug.cgi?id=760141
Showing the drag cancel animation can be done in the X11
drag context implementation now that we hold the drag
window there, and have the start coordinates.
Since we can't control if and when the application destroys
the drag widget, we take a snapshot of the window contents
and display that during the animation. This should be good
enough for all practical purposes.
Add a variant of gdk_drag_begin that takes the start position
in addition to the device. All backend implementations have been
updated to accept (and ignore) the new arguments.
Subsequent commits will make use of the data in some backends.
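As a sketch, the variant looks roughly like this (treat the exact
name and argument list as an assumption based on the description
above):

    GdkDragContext *
    gdk_drag_begin_from_point (GdkWindow *window,
                               GdkDevice *device,
                               GList     *targets,
                               gint       x_root,
                               gint       y_root);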
Commit 1266d15c4 also broke Xwayland, as it does the same trick
as VMware pointers. Let's extend the heuristic to check for "pointer"
in the device name; what could possibly go wrong...
https://bugzilla.gnome.org/show_bug.cgi?id=757358
We currently just look for a master device with input source MOUSE.
After recent changes to the way input devices are classified, xwayland
on my system comes up with a virtual core pointer that has input
source TOUCHSCREEN. This was causing assertion failures. Be a little
more careful and accept a touchscreen as core pointer, if there is
no mouse.
VMware seems to create mouse devices with absolute axes, which
confuses our detection of single-touch touchscreens. Those do,
however, have a name we can match on ("VirtualPS/2 VMware VMMouse"),
so it should
be pretty safe to assume that no real touchscreens have "mouse"
in their name...
https://bugzilla.gnome.org/show_bug.cgi?id=757358
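For illustration, a sketch of the name-based check (the commit above
extends it to also match "pointer"; the helper name is hypothetical):

    #include <string.h>
    #include <glib.h>

    /* Devices with absolute axes are only treated as touchscreens if
     * their name does not give them away as a mouse or pointer. */
    static gboolean
    device_name_suggests_pointer (const char *name)
    {
      char *lower = g_ascii_strdown (name, -1);
      gboolean result = strstr (lower, "mouse") != NULL ||
                        strstr (lower, "pointer") != NULL;

      g_free (lower);
      return result;
    }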
Those won't have ABS_MT_* axes, so they won't be reported as having
XITouchClassInfo. Fall back on these to checking whether abs x/y axes are
available. After the Wacom checks, any remaining device with absolute axes
should be touchscreens, and GDK_SOURCE_MOUSE does indeed just make sense on
devices with relative axes.
https://bugzilla.gnome.org/show_bug.cgi?id=757358
Using a NULL GAppInfo with g_app_launch_context_get_display() will
generate a critical warning in GIO.
Use the display name instead as we don't have any valid GAppInfo to pass
to g_app_launch_context_get_display().
bugzilla: https://bugzilla.gnome.org/show_bug.cgi?id=756439
If GLX has support for the GLX_ARB_create_context_profile extension,
then we use the GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB; if it does
not, we fall back to the old glXCreateNewContext() API.
We use the shared GdkGLContext to decide whether the GLX context should
use the legacy bit or not.
https://bugzilla.gnome.org/show_bug.cgi?id=756142
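For illustration, a sketch of the logic (assumes the
glXCreateContextAttribsARB pointer was already resolved via
glXGetProcAddress(), and `legacy` was derived from the shared
GdkGLContext as described above):

    #include <GL/glx.h>

    static GLXContext
    create_glx_context (Display *dpy, GLXFBConfig config, GLXContext share,
                        PFNGLXCREATECONTEXTATTRIBSARBPROC create_context_attribs,
                        Bool legacy)
    {
      if (create_context_attribs != NULL)  /* GLX_ARB_create_context_profile */
        {
          int attribs[] = {
            GLX_CONTEXT_PROFILE_MASK_ARB,
            legacy ? GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB
                   : GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
            None
          };

          return create_context_attribs (dpy, config, share, True, attribs);
        }

      /* No GLX_ARB_create_context_profile: fall back to the old API. */
      return glXCreateNewContext (dpy, config, GLX_RGBA_TYPE, share, True);
    }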
Otherwise the outer loop control variable is messed up, and we end
up with uninitialized axes if there are any more valuators after
the XIKeyClass one.
This bug was sneakily introduced by fdb9a8e14, many thanks to
Carlos Soriano for helping spot the source of this bug.
https://bugzilla.gnome.org/show_bug.cgi?id=753431
When dealing with selection events, we might see windows from
other screens in the requestor field. The current x11 backend
code fails to wrap these in a foreign GdkWindow, since we
don't have the corresponding GdkScreen anymore. Work around
this by creating such 'foreign screens' on demand. We still
maintain the 1:1 relation between the display and the screen
returned by gdk_display_get_default_screen().
https://bugzilla.gnome.org/show_bug.cgi?id=721398
When using frame times from _NET_WM_FRAME_DRAWN and _NET_WM_FRAME_TIMINGS, we
were treating them as local monotonic times, but they are actually extended-precision
versions of the server time, and need to be translated to monotonic times in the
case where the X server and client aren't running on the same system.
This fixes rendering stalls when using X over a remote ssh connection.
https://bugzilla.gnome.org/show_bug.cgi?id=741800
Avoid using a stale timestamp (from the last user interaction with the
application) when a message arrives from D-Bus requesting that a new
window be created.
In this case the most-correct thing that we can do is to use no
timestamp at all.
We modify gdk_x11_display_set_startup_notification_id() to allow a NULL
value to mean "reset everything" and then call this function
unconditionally on receipt of D-Bus activation requests. The result
will be that a missing desktop-startup-id in the platform-data struct
will reset the timestamp.
Under their default configuration, metacity and mutter will both map
windows presented with no timestamp in the foreground. This could
result in false positives, but there is very little we can do about that
without the original timestamp from the user event.
https://bugzilla.gnome.org/show_bug.cgi?id=752000
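For illustration, a sketch of the receiving side (assuming
platform_data is the a{sv} dictionary from the D-Bus activation
message):

    /* NULL now means "reset everything", so we can call this
     * unconditionally; a missing desktop-startup-id then resets the
     * timestamp instead of leaving a stale one around. */
    const char *startup_id = NULL;

    g_variant_lookup (platform_data, "desktop-startup-id", "&s", &startup_id);
    gdk_x11_display_set_startup_notification_id (display, startup_id);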
If we don't find Xft values in the X resource db, simply fall
back to the values that are hardcoded in /etc/X11/Xresources
anyway. Extra trickery with likely-made-up screen dimensions
is not going to yield better results, and only makes for a
deeper rabbit hole when debugging.
We used to "invalidate" scroll valuators, so the next scroll event could
be used as the base for the next scroll deltas. This has the inconvenience
that it invariably consumes the first event received after enter and,
due to interactions with overeager WM passive button grabs, there's a
possibility we don't scroll at all if we receive interleaved "smooth
scroll" XI_Motion events and XI_Enter events (normally triggered by regular
scroll wheels in mice).
In order to fix this, and at the expense of some sync-call overhead on
XI_Enter events (one XIQueryDevice call per slave device), query the
current scroll valuator state for all the slaves of the entered pointer,
so we know the right base values beforehand. If new devices are plugged in
while the pointer is on top of the client, the initialized scroll values
will match the valuators'.
https://bugzilla.gnome.org/show_bug.cgi?id=750994
https://bugzilla.gnome.org/show_bug.cgi?id=750870
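For illustration, a minimal sketch of the per-slave query (storage of
the base values is omitted):

    #include <X11/extensions/XInput2.h>

    static void
    init_scroll_valuators (Display *dpy, int slave_id)
    {
      int n_devices;
      XIDeviceInfo *info = XIQueryDevice (dpy, slave_id, &n_devices);
      int i, j;

      if (info == NULL)
        return;

      /* For each scroll class, find the matching valuator and record
       * its current value as the base for future scroll deltas. */
      for (i = 0; i < info->num_classes; i++)
        {
          XIScrollClassInfo *scroll;

          if (info->classes[i]->type != XIScrollClass)
            continue;

          scroll = (XIScrollClassInfo *) info->classes[i];

          for (j = 0; j < info->num_classes; j++)
            {
              XIValuatorClassInfo *val;

              if (info->classes[j]->type != XIValuatorClass)
                continue;

              val = (XIValuatorClassInfo *) info->classes[j];
              if (val->number == scroll->number)
                {
                  /* val->value is the base for this scroll valuator */
                }
            }
        }

      XIFreeDeviceInfo (info);
    }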
gdk_x11_device_xi2_window_at_position() may attempt to push/pop
a few error trap pairs while traversing the window tree. Move it
outside the server grab, and around the multiple XIQueryPointer
calls we may do here.
https://bugzilla.gnome.org/show_bug.cgi?id=751739
This patch introduces support for using the newly introduced
monitor objects in the XRandR protocol. These objects are meant
to be used to denote a set of rectangles representing a logical
monitor, and are used to hide details like monitor tiling and
virtual gpu outputs.
This uses the new objects instead of the CRTC/output objects when
they are available to create the monitor lists. X server 1.18
is required on the server side for RandR 1.5.
https://bugzilla.gnome.org/show_bug.cgi?id=749561
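For illustration, a sketch of the RandR 1.5 path:

    #include <stdio.h>
    #include <X11/extensions/Xrandr.h>

    static void
    list_monitors (Display *dpy, Window root)
    {
      int n_monitors, i;
      XRRMonitorInfo *monitors = XRRGetMonitors (dpy, root, True, &n_monitors);

      if (monitors == NULL)
        return;

      for (i = 0; i < n_monitors; i++)
        /* Each entry is one logical monitor; tiling and virtual GPU
         * outputs are already folded into a single rectangle. */
        printf ("monitor %d: %dx%d+%d+%d primary=%d\n", i,
                monitors[i].width, monitors[i].height,
                monitors[i].x, monitors[i].y, (int) monitors[i].primary);

      XRRFreeMonitors (monitors);
    }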
We interpret buttons 4-7 as old-school scroll events, so it does
not make sense to add these to the mask. Also fix an off-by-one
in the loop here; buttons_mask is 1-based.
GdkKeymap already has support for _get_num_lock_state() and
_get_caps_lock_state(). Adding _get_scroll_lock_state() would be good
for completeness, and some backends (Windows?) could take advantage of
this.
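For reference, the existing getters and the proposed addition (the
new declaration is a sketch by analogy, not a quote of existing API):

    gboolean gdk_keymap_get_num_lock_state    (GdkKeymap *keymap);
    gboolean gdk_keymap_get_caps_lock_state   (GdkKeymap *keymap);

    /* proposed, by analogy with the two above */
    gboolean gdk_keymap_get_scroll_lock_state (GdkKeymap *keymap);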
XSetWindowBackgroundPixmap() will throw BadMatch only in the case of a
different parent window depth. Different visuals are fine and actually
expected in Gtk+ 3.16 (since we don't stick to the system default visual
but try to pick a better one).
https://bugzilla.gnome.org/show_bug.cgi?id=747524
If the GLX_EXT_texture_from_pixmap extension was not available when we
did the extensions check, then there's no point in using the
backend-specific code paths that rely on it.
When configuring Gtk+ with --disable-xkb, the build fails because of an
undefined reference to get_xkb().
This patch fixes this issue.
Signed-off-by: Eric Le Bihan <eric.le.bihan.dev@free.fr>
https://bugzilla.gnome.org/show_bug.cgi?id=739070
And use these for the missing axes if the valuator mask is incomplete.
This used to work fine on tablets because the Wacom driver ensures all
valuators are sent, which is not true if using the evdev driver.
https://bugzilla.gnome.org/show_bug.cgi?id=703610
The existence of OpenGL implementations that do not provide full
core profile compatibility, for reasons beyond the technical, like
llvmpipe not implementing floating point buffers, makes it harder
to keep GdkGLProfile around and to document the fact that we use
core profiles.
Since we do not have any existing profile except the default, we can
remove the GdkGLProfile and its related API from GDK and GTK+, and sweep
the whole thing under the carpet, while we wait for an extension that
lets us ask for the most compatible profile possible.
https://bugzilla.gnome.org/show_bug.cgi?id=744407
Now that we have a two-stage GL context creation sequence, we can move
the profile to a pre-realize option, like the debug and forward
compatibility bits, or the GL version to use.
We simply don't want to care about legacy OpenGL.
All supported platforms also have support for OpenGL ≥ 3.2; supporting
legacy contexts would complicate the internal code, and would force us
to use legacy GL contexts internally if the first context created by
the user is a legacy GL context, disabling creation of core-3.2
contexts after that.
We will need to fix all our code examples to use the Core 3.2 profile.
https://bugzilla.gnome.org/show_bug.cgi?id=741946
One of the major requests by OpenGL users has been the ability to
specify settings when creating a GL context, like the version to use
or whether the debug support should be enabled.
We have a couple of requirements in terms of API:
• avoid, if at all possible, the "C arrays of integers with
attribute, value pairs", which are hard to write and hard
to bind in non-C languages.
• allow failing in a recoverable way.
• do not make the GL context creation API a mess of arguments.
Looking at prior art, it seems that a common pattern is to split the
construction phase in two:
• a first phase that creates a GL context wrapper object and
does preliminary checks on the environment.
• a second phase that creates the backend-specific GL object.
We adopted a similar pattern:
• gdk_window_create_gl_context() creates a GdkGLContext
• gdk_gl_context_realize() creates the underlying resources
Calling gdk_gl_context_make_current() also realizes the context, so
simple GL users do not need to care. Advanced users will want to
call gdk_window_create_gl_context(), set up the optional requirements,
and then call gdk_gl_context_realize(). If either of these two steps
fails, it's possible to recover by changing the requirements, or simply
creating a new GdkGLContext instance.
https://bugzilla.gnome.org/show_bug.cgi?id=741946
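For illustration, a sketch of the resulting pattern for advanced
users:

    static GdkGLContext *
    create_context (GdkWindow *window)
    {
      GError *error = NULL;
      GdkGLContext *context;

      /* First phase: wrapper object plus preliminary checks. */
      context = gdk_window_create_gl_context (window, &error);
      if (context == NULL)
        {
          g_clear_error (&error);
          return NULL;  /* recoverable: e.g. run without GL */
        }

      /* Optional requirements go between the two phases. */
      gdk_gl_context_set_debug_enabled (context, TRUE);

      /* Second phase: create the backend-specific GL resources. */
      if (!gdk_gl_context_realize (context, &error))
        {
          /* recoverable: change the requirements, or create a new
           * GdkGLContext instance and try again */
          g_clear_error (&error);
          g_object_unref (context);
          return NULL;
        }

      /* Simple users can skip realize: make_current does it for them. */
      gdk_gl_context_make_current (context);

      return context;
    }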
- Specifically request the GL version when creating a context. Just
specifying the core profile bit results in the requested version
defaulting to 1.0, which causes the core profile bit to be ignored and
an arbitrary compatibility context to be returned.
- Fix GL painting by removing GL calls that have been deprecated by the
3.2 core profile.
- Additionally, remove the glInvalidateFramebuffer() call; it is not
supported by 3.2 core.
https://bugzilla.gnome.org/show_bug.cgi?id=742953
The ICCCM says:
If the specified property is None, the requestor is an obsolete client.
Owners are encouraged to support these clients by using the specified
target atom as the property name to be used for the reply.
Let's do that, instead of crashing.
https://bugzilla.gnome.org/show_bug.cgi?id=740613
The previous fix for this issue in 732af31424 was incomplete.
If we use GDK_GL_PROFILE_3_2_CORE we are asking for a core profile
according to the GLX_ARB_create_context_profile extension. For that,
we pass the GLX_CONTEXT_CORE_PROFILE_BIT_ARB value for the
GLX_CONTEXT_PROFILE_MASK_ARB attribute.
The specification for the extension says that:
If the requested OpenGL version is less than 3.2,
GLX_CONTEXT_PROFILE_MASK_ARB is ignored and the functionality
of the context is determined solely by the requested version.
Since we're asking for a core profile, we assume a GL version greater
than or equal to 3.2; thus, we don't need to specify the
GLX_CONTEXT_MAJOR_VERSION_ARB or the GLX_CONTEXT_MINOR_VERSION_ARB
attributes, and instead just rely on whatever version GLX gives us.
This seems to work around a strange issue in Mesa; if we ask for a core
profile and any version > 3.0, we get broken rendering on any shared
context we create.
We've observed hangs of mutter when it initializes GTK+, which
are caused by initializing GL, which in turn makes xwayland
call back into mutter. With this change, mutter should just
disable GL support in GDK, and things will work.
This is required for the X backend GL integration. If the
window has a height that is not a multiple of the window scale
we can't properly do the y coordinate flipping that GL needs.
Other backends can ignore this and use the default implementation.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
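For illustration, the flip needs the real height in pixels (variable
names here are hypothetical):

    /* GL's origin is the bottom-left corner: flipping a rectangle given
     * in scaled window coordinates requires the true height in pixels.
     * If unscaled_height is not scale * logical_height (the height is
     * not a multiple of the scale), the flipped coordinate is off. */
    int gl_y = unscaled_height - scale * (y + height);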
Rather than just rounding down the position *and* the size separately,
we now correctly calculate a rectangle in scaled window coords that fully
covers the real window size. This really only makes a difference
when the window size/position isn't a multiple of the window scale.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
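For illustration, a sketch of the covering rectangle in integer math
(assumes non-negative coordinates):

    /* Round the origin down and the far edge up, so the scaled
     * rectangle fully covers the pixel rectangle (x, y, width, height). */
    rect.x = x / scale;
    rect.y = y / scale;
    rect.width  = (x + width  + scale - 1) / scale - rect.x;
    rect.height = (y + height + scale - 1) / scale - rect.y;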
Keep track of the exact size of X windows in underlying pixels; we
generally use the scaled size instead, but to properly handle the GL
viewport for windows that aren't a multiple of window_scale,
we need to know the real size.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
Although we specify a resize increment to try and get a size that is
a multiple of the window scale, maximization typically wins
over the resize increment, so the window might be odd sized.
Round *up* in this case, rather than down, since it's better to
truncate a line or two at the bottom and right of the window rather
than have a line or two that we don't know what to do with.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
If buffer age is undefined and the updated area is not the whole
window, then we use bit-blits instead of swap-buffers to end the
frame.
This allows us to avoid repainting the entire window unnecessarily if
buffer_age is not supported, e.g. with DRI2.
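For illustration, a sketch of the end-of-frame decision (the blit
helper and the two flags are hypothetical):

    unsigned int buffer_age = 0;

    if (have_glx_buffer_age)  /* GLX_EXT_buffer_age available? */
      glXQueryDrawable (dpy, drawable, GLX_BACK_BUFFER_AGE_EXT, &buffer_age);

    if (buffer_age == 0 && !update_covers_whole_window)
      blit_updated_region ();  /* bit-blit just the damaged area */
    else
      glXSwapBuffers (dpy, drawable);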