Otherwise the outer loop control variable is messed up, and we end
up with uninitialized axes if there are any more valuators after
the XIKeyClass one.
This bug was sneakily introduced by fdb9a8e14; many thanks to
Carlos Soriano for helping to spot its source.
https://bugzilla.gnome.org/show_bug.cgi?id=753431
When dealing with selection events, we might see windows from
other screens in the requestor field. The current x11 backend
code fails to wrap these in a foreign GdkWindow, since we
don't have the corresponding GdkScreen anymore. Work around
this by creating such 'foreign screens' on demand. We still
maintain the 1:1 relation between the display and the screen
returned by gdk_display_get_default_screen().
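A minimal sketch of the lookup this implies; create_foreign_screen()
is a hypothetical stand-in for the on-demand wrapper, which would be
cached per X screen:

  static GdkScreen *
  screen_for_requestor (GdkDisplay *display, Window requestor)
  {
    Display *xdisplay = GDK_DISPLAY_XDISPLAY (display);
    XWindowAttributes attrs;

    if (!XGetWindowAttributes (xdisplay, requestor, &attrs))
      return NULL;

    /* the usual case: the requestor lives on our screen */
    if (attrs.screen == DefaultScreenOfDisplay (xdisplay))
      return gdk_display_get_default_screen (display);

    /* otherwise create (or reuse) a 'foreign screen' wrapper */
    return create_foreign_screen (display, attrs.screen);
  }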
https://bugzilla.gnome.org/show_bug.cgi?id=721398
When using frame times from _NET_WM_FRAME_DRAWN and _NET_WM_FRAME_TIMINGS, we
were treating them as local monotonic times, but they are actually extended-precision
versions of the server time, and need to be translated to monotonic times in the
case where the X server and client aren't running on the same system.
This fixes rendering stalls when using X over a remote ssh connection.
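A sketch of the translation, assuming we have sampled one
(server time, monotonic time) correspondence pair; the names here
are illustrative, not GDK's internal ones:

  #include <glib.h>

  /* frame times are assumed here to be 64-bit, microsecond-resolution
     extensions of the 32-bit server time */
  static gint64
  server_time_to_monotonic (gint64 frame_time_us,
                            gint64 sampled_server_time_us,
                            gint64 sampled_monotonic_time_us)
  {
    /* the offset is ~0 when client and server share a clock, and
       absorbs the clock difference over a remote connection */
    return frame_time_us +
           (sampled_monotonic_time_us - sampled_server_time_us);
  }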
https://bugzilla.gnome.org/show_bug.cgi?id=741800
Avoid using a stale timestamp (from the last user interaction with the
application) when a message arrives from D-Bus requesting that a new
window be created.
In this case the most-correct thing that we can do is to use no
timestamp at all.
We modify gdk_x11_display_set_startup_notification_id() to allow a NULL
value to mean "reset everything" and then call this function
unconditionally on receipt of D-Bus activation requests. The result
will be that a missing desktop-startup-id in the platform-data struct
will reset the timestamp.
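At the call site, the relaxed API reads like this:

  /* startup_id comes from the "desktop-startup-id" key in the D-Bus
     platform-data vardict, and is NULL when the key is absent */
  gdk_x11_display_set_startup_notification_id (display, startup_id);

  /* passing NULL now means "reset everything", clearing both the
     startup ID and the stale user-interaction timestamp */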
Under their default configuration, metacity and mutter will both map
windows presented with no timestamp in the foreground. This could
result in false positives, but there is very little we can do about
that without the original timestamp from the user event.
https://bugzilla.gnome.org/show_bug.cgi?id=752000
If we don't find Xft values in the X resource db, simply fall
back to the values that are hardcoded in /etc/X11/Xresources
anyway. Extra trickery with likely-made-up screen dimensions
is not going to yield better results, and only makes for a
deeper rabbit hole when debugging.
We used to "invalidate" scroll valuators, so the next scroll event could
be used as the base for the next scroll deltas. This has the inconvenience
that it invariably consumes the first event received after enter and,
due to interactions with WM overeager passive button grabs, there's a
possibility we don't scroll at all if we receive interleaved "smooth
scroll" XI_Motion events and XI_Enter events (Normally triggered by regular
scroll wheels in mice).
In order to fix this, and at the expense of some sync-call overhead on
XI_Enter events (one XIQueryDevice call per slave device), query the
current scroll valuator state for all the slaves of the entered pointer,
so we know the right base values beforehand. If new devices are plugged
in while the pointer is over the client, the initialized scroll values
will match the valuators'.
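A rough sketch of the XI_Enter handling; for brevity this uses one
XIAllDevices query instead of one query per slave, and the storage of
the base values is application-specific:

  #include <X11/extensions/XInput2.h>

  static void
  seed_scroll_valuators (Display *xdisplay, int master_pointer_id)
  {
    int i, j, n_devices;
    XIDeviceInfo *info = XIQueryDevice (xdisplay, XIAllDevices, &n_devices);

    for (i = 0; i < n_devices; i++)
      {
        /* only slaves attached to the entering master pointer */
        if (info[i].use != XISlavePointer ||
            info[i].attachment != master_pointer_id)
          continue;

        for (j = 0; j < info[i].num_classes; j++)
          {
            if (info[i].classes[j]->type == XIValuatorClass)
              {
                XIValuatorClassInfo *v =
                  (XIValuatorClassInfo *) info[i].classes[j];

                /* record v->number and v->value as the base for
                   future scroll deltas */
              }
          }
      }

    XIFreeDeviceInfo (info);
  }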
https://bugzilla.gnome.org/show_bug.cgi?id=750994
https://bugzilla.gnome.org/show_bug.cgi?id=750870
gdk_x11_device_xi2_window_at_position() may attempt to push/pop
a few error trap pairs while traversing the window tree. Move the
error trap outside the server grab, and around the multiple
XIQueryPointer calls we may make here.
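The resulting shape, using GDK's display-level error traps:

  gdk_x11_display_error_trap_push (display);
  gdk_x11_display_grab (display);

  /* ... multiple XIQueryPointer() calls walking the window tree ... */

  gdk_x11_display_ungrab (display);
  gdk_x11_display_error_trap_pop_ignored (display);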
https://bugzilla.gnome.org/show_bug.cgi?id=751739
This patch adds support for the monitor objects newly introduced
in the XRandR protocol. These objects denote a set of rectangles
representing a logical monitor, and are used to hide details like
monitor tiling and virtual GPU outputs.
When they are available, the new objects are used instead of
crtc/output objects to create the monitor lists. X server 1.18
is required on the server side for RandR 1.5.
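A minimal sketch of reading the new objects with RandR 1.5's
XRRGetMonitors() (xdisplay and root are the usual Display and root
Window):

  #include <X11/extensions/Xrandr.h>

  int i, n_monitors;
  XRRMonitorInfo *m = XRRGetMonitors (xdisplay, root, True, &n_monitors);

  for (i = 0; i < n_monitors; i++)
    {
      /* m[i].x / m[i].y / m[i].width / m[i].height describe one
         logical monitor rectangle, with tiling and virtual
         outputs already folded in */
    }

  XRRFreeMonitors (m);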
https://bugzilla.gnome.org/show_bug.cgi?id=749561
We interpret buttons 4-7 as old-school scroll events, so it does
not make sense to add these to the mask. Also fix an off-by-one
in the loop here: buttons_mask is 1-based.
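The corrected loop looks roughly like this; button_to_mask() is a
hypothetical stand-in for the actual button-to-modifier mapping:

  for (i = 1; i <= n_buttons; i++)   /* 1-based, not 0-based */
    {
      if (i >= 4 && i <= 7)
        continue;                    /* old-school scroll buttons */

      if (XIMaskIsSet (buttons_mask, i))
        state |= button_to_mask (i);
    }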
GdkKeymap already has support for _get_num_lock_state() and
_get_caps_lock_state(). Adding _get_scroll_lock_state() would be good
for completeness and some backends (Windows?) could take advantage of
this.
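The proposed getter would mirror the existing ones:

  gboolean gdk_keymap_get_num_lock_state    (GdkKeymap *keymap); /* existing */
  gboolean gdk_keymap_get_caps_lock_state   (GdkKeymap *keymap); /* existing */
  gboolean gdk_keymap_get_scroll_lock_state (GdkKeymap *keymap); /* proposed */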
XSetWindowBackgroundPixmap() will throw BadMatch only in the case of a
different parent window depth. Different visuals are fine and actually
expected in Gtk+ 3.16 (since we don't stick to the system default visual
but try to pick a better one).
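Assuming the ParentRelative case this message is about, the implied
guard compares depths rather than visuals:

  /* ParentRelative only BadMatches when the depths differ */
  if (window_depth == parent_depth)
    XSetWindowBackgroundPixmap (xdisplay, xwindow, ParentRelative);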
https://bugzilla.gnome.org/show_bug.cgi?id=747524
If the GLX_EXT_texture_from_pixmap extension was not available when we
did the extensions check, then there's no point in using the backend-specific
code paths that rely on it.
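The guard, sketched with libepoxy (which GDK's GL code uses for
extension checks):

  #include <epoxy/glx.h>

  if (!epoxy_has_glx_extension (xdisplay, screen_number,
                                "GLX_EXT_texture_from_pixmap"))
    {
      /* skip the texture-from-pixmap code paths entirely */
    }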
When configuring Gtk+ with --disable-xkb, the build fails because of an
undefined reference to get_xkb().
This patch fixes this issue.
Signed-off-by: Eric Le Bihan <eric.le.bihan.dev@free.fr>
https://bugzilla.gnome.org/show_bug.cgi?id=739070
And use these for the missing axes if the valuator mask is incomplete.
This used to work fine on tablets because the Wacom driver ensures all
valuators are sent, which is not true if using the evdev driver.
https://bugzilla.gnome.org/show_bug.cgi?id=703610
The existence of OpenGL implementations that do not provide full
core profile compatibility for reasons beyond the technical (like
llvmpipe not implementing floating point buffers) makes keeping
GdkGLProfile, and documenting the fact that we use core profiles,
a bit harder.
Since we do not have any existing profile except the default, we can
remove the GdkGLProfile and its related API from GDK and GTK+, and sweep
the whole thing under the carpet, while we wait for an extension that
lets us ask for the most compatible profile possible.
https://bugzilla.gnome.org/show_bug.cgi?id=744407
Now that we have a two-stage GL context creation sequence, we can move
the profile to a pre-realize option, like the debug and forward
compatibility bits, or the GL version to use.
We simply don't want to care about legacy OpenGL.
All supported platforms have support for OpenGL ≥ 3.2; supporting
legacy contexts would complicate the internal code, and would force us
to use legacy GL contexts internally (and disable creation of core-3.2
contexts) whenever the first context created by the user is a legacy
GL context.
We will need to fix all our code examples to use the Core 3.2 profile.
https://bugzilla.gnome.org/show_bug.cgi?id=741946
One of the major requests by OpenGL users has been the ability to
specify settings when creating a GL context, like the version to use
or whether the debug support should be enabled.
We have a couple of requirements in terms of API:
• avoid, if at all possible, the "C arrays of integers with
attribute, value pairs", which are hard to write and hard
to bind in non-C languages.
• allow failing in a recoverable way.
• do not make the GL context creation API a mess of arguments.
Looking at prior art, it seems that a common pattern is to split the
construction phase in two:
• a first phase that creates a GL context wrapper object and
does preliminary checks on the environment.
• a second phase that creates the backend-specific GL object.
We adopted a similar pattern:
• gdk_window_create_gl_context() creates a GdkGLContext
• gdk_gl_context_realize() creates the underlying resources
Calling gdk_gl_context_make_current() also realizes the context, so
simple GL users do not need to care. Advanced users will want to
call gdk_window_create_gl_context(), set up the optional requirements,
and then call gdk_gl_context_realize(). If either of these two steps
fails, it's possible to recover by changing the requirements, or simply
creating a new GdkGLContext instance.
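Putting the two phases together, advanced usage might look like this:

  GError *error = NULL;
  GdkGLContext *context;

  context = gdk_window_create_gl_context (window, &error);
  if (context == NULL)
    {
      /* first phase failed: preliminary environment checks */
      g_error_free (error);
      return;
    }

  /* optional requirements, set before realization */
  gdk_gl_context_set_required_version (context, 3, 2);
  gdk_gl_context_set_debug_enabled (context, TRUE);

  if (!gdk_gl_context_realize (context, &error))
    {
      /* recoverable: adjust the requirements and realize again,
         or create a new GdkGLContext instance */
      g_clear_error (&error);
      g_object_unref (context);
      return;
    }

  gdk_gl_context_make_current (context);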
https://bugzilla.gnome.org/show_bug.cgi?id=741946
- Specifically request the GL version when creating a context; just
specifying the core profile bit results in the requested version
defaulting to 1.0, which causes the core profile bit to be ignored and
an arbitrary compatibility context to be returned (see the sketch
below).
- Fix GL painting by removing GL calls that have been deprecated by the
3.2 core profile.
- Additionally remove the glInvalidateFramebuffer() call; it is not
supported by 3.2 core.
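The first point translates to an attribute list like the following for
glXCreateContextAttribsARB() (xdisplay, fbconfig and shared_context are
assumed to be in scope):

  static const int attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
  };

  context = glXCreateContextAttribsARB (xdisplay, fbconfig,
                                        shared_context, True, attribs);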
https://bugzilla.gnome.org/show_bug.cgi?id=742953
The ICCCM says:
If the specified property is None, the requestor is an obsolete client.
Owners are encouraged to support these clients by using the specified
target atom as the property name to be used for the reply.
Let's do that, instead of crashing.
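In code, the fallback amounts to (xevent being the
XSelectionRequestEvent):

  if (xevent->property == None)
    xevent->property = xevent->target;  /* obsolete client: reply on
                                           the target atom */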
https://bugzilla.gnome.org/show_bug.cgi?id=740613
The previous fix for this issue in 732af31424 was incomplete.
If we use GDK_GL_PROFILE_3_2_CORE we are asking for a core profile
according to the GLX_ARB_create_context_profile extension. For that,
we pass the GLX_CONTEXT_CORE_PROFILE_BIT_ARB value for the
GLX_CONTEXT_PROFILE_MASK_ARB attribute.
The specification for the extension says that:
If the requested OpenGL version is less than 3.2,
GLX_CONTEXT_PROFILE_MASK_ARB is ignored and the functionality
of the context is determined solely by the requested version.
Since we're asking for a core profile, we assume a GL version greater
than or equal to 3.2; thus, we don't need to specify the
GLX_CONTEXT_MAJOR_VERSION_ARB or the GLX_CONTEXT_MINOR_VERSION_ARB
attributes, and instead just rely on whatever version GLX gives us.
This seems to work around a strange issue in Mesa; if we ask for a core
profile and any version > 3.0, we get broken rendering on any shared
context we create.
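The attribute list this describes carries only the profile mask,
leaving the version up to GLX:

  static const int attribs[] = {
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
  };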
We've observed hangs of mutter when it initializes GTK+; they are
caused by initializing GL, which in turn makes Xwayland call back
into mutter. With this change, mutter should just disable GL support
in GDK, and things will work.