On the way towards deprecating gdk_display_notify_startup_complete(),
make gdk_toplevel_set_startup_id() on X11 perform this piece of messaging
itself. It should be harmless for the message to be emitted twice while
callers still use that API.
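As a sketch of what this means for callers (variable names are
illustrative):
```c
/* Sketch: on X11, setting the startup ID on the toplevel now emits the
 * startup-complete message itself, so the explicit call below becomes
 * redundant (but harmless) during the deprecation period. */
gdk_toplevel_set_startup_id (GDK_TOPLEVEL (surface), startup_id);

/* No longer required on X11: */
gdk_display_notify_startup_complete (display, startup_id);
```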
Currently, the GdkSurfaceX11 implementation relies on the upper
layers hiding the surface before destruction, and on no
GdkSurfaceClass.compute_size happening between the two. If those
assumptions do not hold, a compute_size timeout is left
dangling after the surface got destroyed, poking at incorrect data
later on. Something that looks like this was reported in the
recent mutter-x11-frames "SSD frames server":
(mutter-x11-frames:423016): GLib-GObject-WARNING **: 19:41:16.869: invalid unclassed pointer in cast to 'GtkWindow'
Thread 1 "mutter-x11-fram" received signal SIGTRAP, Trace/breakpoint trap.
g_logv (log_domain=0x7ffff7f7c4f8 "GLib-GObject", log_level=G_LOG_LEVEL_WARNING, format=<optimized out>, args=<optimized out>) at ../../../glib/gmessages.c:1433
1433 ../../../glib/gmessages.c: No such file or directory.
(gdb) bt
#0 g_logv (log_domain=0x7ffff7f7c4f8 "GLib-GObject", log_level=G_LOG_LEVEL_WARNING, format=<optimized out>, args=<optimized out>) at ../../../glib/gmessages.c:1433
#1 0x00007ffff73470ff in g_log (log_domain=log_domain@entry=0x7ffff7f7c4f8 "GLib-GObject", log_level=log_level@entry=G_LOG_LEVEL_WARNING, format=format@entry=0x7ffff7f84da8 "invalid unclassed pointer in cast to '%s'")
at ../../../glib/gmessages.c:1471
#2 0x00007ffff7f72892 in g_type_check_instance_cast (type_instance=type_instance@entry=0x5555558e04b0, iface_type=<optimized out>) at ../../../gobject/gtype.c:4144
#3 0x00007ffff791e77d in toplevel_compute_size (toplevel=<optimized out>, size=0x7fffffffe170, widget=0x5555558e04b0) at ../../../gtk/gtkwindow.c:4227
#4 0x00007ffff7f4f3b0 in g_closure_invoke (closure=0x555555898cc0, return_value=return_value@entry=0x0, n_param_values=2, param_values=param_values@entry=0x7fffffffdeb0, invocation_hint=invocation_hint@entry=0x7fffffffde30)
at ../../../gobject/gclosure.c:832
#5 0x00007ffff7f62076 in signal_emit_unlocked_R
(node=node@entry=0x55555588feb0, detail=detail@entry=0, instance=instance@entry=0x55555560e990, emission_return=emission_return@entry=0x0, instance_and_params=instance_and_params@entry=0x7fffffffdeb0)
at ../../../gobject/gsignal.c:3796
#6 0x00007ffff7f68bf5 in g_signal_emit_valist (instance=<optimized out>, signal_id=<optimized out>, detail=<optimized out>, var_args=var_args@entry=0x7fffffffe050) at ../../../gobject/gsignal.c:3549
#7 0x00007ffff7f68dbf in <emit signal ??? on instance 0x55555560e990 [GdkX11Toplevel]> (instance=<optimized out>, signal_id=<optimized out>, detail=detail@entry=0) at ../../../gobject/gsignal.c:3606
#8 0x00007ffff7a8de96 in gdk_toplevel_notify_compute_size (toplevel=<optimized out>, size=size@entry=0x7fffffffe170) at ../../../gdk/gdktoplevel.c:112
#9 0x00007ffff7a4b15a in compute_toplevel_size (surface=surface@entry=0x55555560e990 [GdkX11Toplevel], update_geometry=update_geometry@entry=1, width=width@entry=0x7fffffffe220, height=height@entry=0x7fffffffe224)
at ../../../gdk/x11/gdksurface-x11.c:281
#10 0x00007ffff7a4c3b2 in compute_size_idle (user_data=0x55555560e990) at ../../../gdk/x11/gdksurface-x11.c:356
#11 0x00007ffff733f67f in g_main_dispatch (context=0x55555563f6e0) at ../../../glib/gmain.c:3444
#12 g_main_context_dispatch (context=context@entry=0x55555563f6e0) at ../../../glib/gmain.c:4162
#13 0x00007ffff733fa38 in g_main_context_iterate (context=0x55555563f6e0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../../../glib/gmain.c:4238
#14 0x00007ffff733fcef in g_main_loop_run (loop=loop@entry=0x5555560874a0) at ../../../glib/gmain.c:4438
#15 0x0000555555557de0 in main (argc=<optimized out>, argv=<optimized out>) at ../src/frames/main.c:68
It perhaps makes sense to warn in these situations, but either way
it sounds like gdk_surface_x11_finalize() could enforce the correct
behavior by ensuring there are no dangling timeouts/data. This commit
does that.
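A minimal sketch of the idea, assuming a hypothetical
compute_size_source_id field (the real member name may differ):
```c
/* Sketch: have finalize itself drop any pending compute_size source,
 * instead of trusting upper layers to hide the surface first. */
static void
gdk_x11_surface_finalize (GObject *object)
{
  GdkX11Surface *self = GDK_X11_SURFACE (object);

  if (self->compute_size_source_id != 0)  /* assumed field name */
    {
      g_source_remove (self->compute_size_source_id);
      self->compute_size_source_id = 0;
    }

  G_OBJECT_CLASS (gdk_x11_surface_parent_class)->finalize (object);
}
```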
This makes GtkSettings values on X11 match what we get on
other backends.
Reporting size settings in logical pixels (i.e. for scale
== 1) is useful for properly supporting mixed-DPI setups.
As X11 doesn't support mixed-DPI setups anyway, XSettings
doesn't bother providing logical values. Thus we scale
from physical to logical values ourselves.
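A sketch of the conversion; the rounding choice here is ours, not
necessarily what the commit does:
```c
/* Sketch: XSettings hands us physical pixels; divide by the window
 * scale to get the logical value GtkSettings reports elsewhere. */
static int
physical_to_logical (int physical, int scale)
{
  return (physical + scale / 2) / scale;  /* round to nearest */
}
```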
Fixes https://gitlab.gnome.org/GNOME/gtk/-/issues/5223
Fixes https://gitlab.gnome.org/GNOME/gtk/-/issues/5230
If we get an invalid TARGETS reply, we might not have a valid 'type',
which ends up as NULL and segfaults in g_str_equal.
(This is probably fallout from my fix 506566b6a4, which I still
can't reproduce reliably, so the last one just moved the segfault a bit
further along, and we still don't know who is sending a bad TARGETS.)
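The shape of the guard, with illustrative names:
```c
/* Sketch: a bad TARGETS reply can leave `type` NULL, and
 * g_str_equal() crashes on NULL, so bail out first. */
if (type == NULL || !g_str_equal (type, "ATOM"))
  return;  /* ignore the malformed reply */
```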
This corresponds to:
https://bugzilla.redhat.com/show_bug.cgi?id=2062143
The XDND suggested action is a relic from when the source would control
the action for a drop. With the new GtkDropTarget, the target decides
the action (not the source). That means all of the returned results
from the ::enter and ::motion handlers will be unexpectedly ignored.
Prefer the preferred action over the X11 suggested action.
Fixes: https://gitlab.gnome.org/GNOME/gtk/-/issues/4259
gdk_x11_drop_update_actions() sets actions to
drop_x11->suggested_action when !drop_x11->xdnd_have_actions,
and then sets it to drop_x11->suggested_action again when it is true.
Instead, use drop_x11->xdnd_actions when xdnd_have_actions is true.
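Illustratively, with field names taken from the description above
rather than the real struct:
```c
/* Sketch of the corrected logic: */
if (drop_x11->xdnd_have_actions)
  actions = drop_x11->xdnd_actions;
else
  actions = drop_x11->suggested_action;
```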
If we take the early return, we don't unscale the coordinates at the
bottom of the function, causing wrong coordinates on HiDPI screens.
This bug also affects GTK3 (I noticed this running Firefox tests on X).
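Schematically, with made-up names and a hypothetical early-exit
condition:
```c
/* Sketch of the fix pattern: route the early return through the common
 * unscaling instead of returning device coordinates raw. */
static void
translate_coords (int dev_x, int dev_y, int scale, gboolean early_exit,
                  int *out_x, int *out_y)
{
  int x = dev_x;
  int y = dev_y;

  if (early_exit)
    goto out;  /* the buggy version returned here, skipping the unscale */

  /* ... adjustments that only happen on the normal path ... */

out:
  *out_x = x / scale;  /* device pixels -> logical pixels */
  *out_y = y / scale;
}
```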
Mimic the behavior of the EGL context creation by establishing
some sane logic for the API and version used. Split the decision
about the type of context (API, legacy) from the creation of a
context of a given version with all its properties.
Add "stylus" to the list of substrings in a device name that cause it to be recognized
as a GDK_SOURCE_PEN device (previously "wacom", "pen" and "eraser"). Some devices
just use "stylus" in their name, and are otherwise recognized as
GDK_SOURCE_TOUCHSCREEN instead.
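An illustrative form of the heuristic (the helper name is made up):
```c
/* Sketch: classify a device by substrings of its lowercased name. */
static GdkInputSource
guess_source_from_name (const char *name)
{
  char *lower = g_ascii_strdown (name, -1);
  GdkInputSource source = GDK_SOURCE_TOUCHSCREEN;

  if (strstr (lower, "wacom") ||
      strstr (lower, "pen") ||
      strstr (lower, "eraser") ||
      strstr (lower, "stylus"))  /* the newly added substring */
    source = GDK_SOURCE_PEN;

  g_free (lower);
  return source;
}
```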
Fixes #4394.
Not updating the shadow size unconditionally would lead to the shadow
size not being set on map, which would lead mutter to think that we are
a window without extents and then become confused when we suddenly set some.
Make sure that doesn't happen by always having shadows set on map, just
like GTK3.
Fixes #4136
Those property features don't seem to be in use anywhere.
They are redundant since the docs cover the same information
and more. They also created unnecessary translation work.
Closes #4904
When given an invalid atom, gdk_x11_get_xatom_name_for_display() can
return NULL and trigger a segfault in gdk_x11_clipboard_formats_from_atoms().
Check for NULL.
Why I'm seeing a bad atom there is probably a separate question.
https://bugzilla.redhat.com/show_bug.cgi?id=2037786
Add a new GdkScrollUnit enum that represents the
unit of scroll deltas provided by GdkScrollEvent.
The unit is accessible through
gdk_scroll_event_get_unit().
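A usage sketch; GDK_SCROLL_UNIT_WHEEL and GDK_SCROLL_UNIT_SURFACE are
the two values the enum provides:
```c
/* Sketch: interpret scroll deltas according to their unit instead of
 * assuming wheel clicks. */
static void
handle_scroll (GdkEvent *event)
{
  double dx, dy;

  gdk_scroll_event_get_deltas (event, &dx, &dy);

  switch (gdk_scroll_event_get_unit (event))
    {
    case GDK_SCROLL_UNIT_WHEEL:
      /* dx/dy are in wheel "clicks" */
      break;
    case GDK_SCROLL_UNIT_SURFACE:
      /* dx/dy are in logical pixels */
      break;
    default:
      break;
    }
}
```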
When destroying the EGLSurface or GLXDrawable of a GdkSurface, make sure
the current context is not still bound to it.
If it is, clear the current context.
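A sketch of the EGL half (the GLX side is analogous; variable names are
illustrative):
```c
/* Sketch: if the surface being destroyed is current for draw or read,
 * release the context before destroying the surface. */
if (eglGetCurrentSurface (EGL_DRAW) == egl_surface ||
    eglGetCurrentSurface (EGL_READ) == egl_surface)
  eglMakeCurrent (egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                  EGL_NO_CONTEXT);

eglDestroySurface (egl_display, egl_surface);
```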
Fixes #4554
It makes sense to connect the begin/update/end events
for touchpad swipes and pinches in a sequence. This
commit adds the plumbing for it, but no backends
are setting sequences yet.
We finish the write to the output stream long after the stream has been
closed, so we want to keep the event handler around to do just that.
Instead of removing it on close, remove the handler on finalize.
The OutputStream needs to write a zero-byte end-of-stream property. We
need to track whether that has been written, and we do that with a new
property.
We also use that property to always request flushes when the stream is
being closed, so that we don't wait for another flush() call.
We need to be very careful when writing data, because if we aren't, sync
functions will be called on the output stream and X11 does not like that
at all.
Instead of using GL_BACK, use GL_BACK_LEFT, because the spec demands
this (even though many drivers don't enforce it).
Also move the call from the GDK backends into the GLContext code, as
this is a generic EGL issue (nvidia being the main driver in need of
this call, see 9c4c4eaaa1 for a longer
discussion).
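Schematically, the shared call could look like this sketch (not the
exact code):
```c
/* Sketch: in the GL setup path, select the spec-correct name for the
 * back buffer of a double-buffered default framebuffer. */
const GLenum buffers[] = { GL_BACK_LEFT };
glDrawBuffers (1, buffers);
```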
Fixes #4402
Add gdk_gl_context_is_api_allowed() for backends and make them use it.
Finally, have them return the final API as the return value (or 0 on
error).
And then use that API instead of a use_es boolean flag.
Fixes #4221
The term "hdr" is so overloaded, we shouldn't use them anywhere, except
from maybe describing all of this work in blog posts and other marketing
materials.
So do renames:
* hdr => high_depth
* request_hdr => prefers_high_depth
This more accurately describes what is going on.
Unify the X11 and Wayland EGL contexts.
This is a bit ugly to implement, because I don't want to create an
interface and I can't make them inherit from the same object, because
one needs to inherit from X11GLContext and the other from
WaylandGLContext.
So we have to put the code in GdkGLContext and make sure non-EGL
contexts can't accidentally run it. This is rather easy because we can
just check for priv->egl_context != NULL.
We have a global GdkGLBackendType now, just set it.
This way, using the variable forces the backend type, and we don't need
special code handling the env vars in the backends.
It also means setting the env var will now "work" on GDK backends that
don't even support that GL backend and simulate another GDK backend
having registered that GL backend already. So you can run
GDK_DEBUG=gl-wgl gtk4-demo
to test what Wayland will do when WGL is in use.
Print the extensions one per line, and sort them
alphabetically, so it is actually possible to find
something in the list.
Also print a short description of the chosen config.
Creative people managed to create an X11 display and a Wayland display
at once, thereby getting EGL and GLX involved in a fight to the death
over the ownership of the glFoo() symbolspace.
A way to force such a fight with available tools here is (on Wayland)
running something like:
GTK_INSPECTOR_DISPLAY=:1 GTK_DEBUG=interactive gtk4-demo
Related: xdg-desktop-portal-gnome#5
It seems these are sent with `xwindow` set to the root window, so this
was failing to find a surface and get the screen from that.
I'm not sure if there's a reason not to get the screen this way
elsewhere in the function, but it seems this should be correct.
This fixes the behavior of `gdk_x11_display_get_monitors()`, whose
list wasn't being updated correctly when monitors were added or removed.
For instance, this Python code always showed the same number of
monitors when one was turned off and on, but updates correctly with this
change applied:
```python
import gi
gi.require_version("GLib", "2.0")
gi.require_version("Gdk", "4.0")
gi.require_version("Gtk", "4.0")
from gi.repository import GLib, Gdk, Gtk
def f():
    print(len(Gdk.Display.get_default().get_monitors()))
    return True

GLib.timeout_add_seconds(1, f)
GLib.MainLoop().run()
```
With gtkmm, when using `Application()`, the display is initialized
before we know the application name, and therefore the program class
associated with the display is NULL.
Instead of providing a default value, we set it equal to the program
name when it is NULL. Moreover, we give up on capitalizing the class
name to keep the code super simple. Also, not using a capitalized name
is consistent with `gdk_x11_display_open()`. If someone has a good
reason to use a capitalized name, here is how to do it:
```c
class_hint = XAllocClassHint ();
class_hint->res_name = (char *) g_get_prgname ();
if (display_x11->program_class)
  {
    class_hint->res_class = g_strdup (display_x11->program_class);
  }
else if (class_hint->res_name && class_hint->res_name[0])
  {
    class_hint->res_class = g_strdup (class_hint->res_name);
    class_hint->res_class[0] = g_ascii_toupper (class_hint->res_class[0]);
  }
XSetClassHint (xdisplay, impl->xid, class_hint);
g_free (class_hint->res_class);
XFree (class_hint);
```
Fix eff53c023a ("x11: set a default value for program_class")
Usually the "dnd-finished" signal will be used to unref the GdkDrag. In
those cases, we would lose the object, so that when we do the final
drag_drop_done() afterwards, we wouldn't have a remaining reference.
With the reference guard, this now works.
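An illustrative shape of the guard (`success` is a stand-in for the
real drop result):
```c
/* Sketch: handlers of "dnd-finished" commonly drop their reference,
 * so hold one across the emission and the final drag_drop_done(). */
g_object_ref (drag);
g_signal_emit_by_name (drag, "dnd-finished");
gdk_drag_drop_done (drag, success);
g_object_unref (drag);
```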
It is good practice for (floating) window managers to respect explicit
position hints from clients (as long as the window wouldn't end up
off-screen etc.).
Before commit 13d3afa56e, GTK had a flag for setting the PPosition hint,
but now does so unconditionally. However, the real intention is to *not*
request a fixed position, so don't do that.
nvidia sets the default draw buffer to GL_NONE if EGL contexts are
initially bound to EGL_NO_SURFACE, which is exactly what we are doing. So
bind them to GL_BACK when drawing, as they should be.
See https://phabricator.services.mozilla.com/D118743 for a discussion
about EGL_NO_CONTEXT and draw buffers.
This has the benefit that we can refactor it and make sure we deal with
GdkDisplay::init_gl() not being called at all because
GDK_DEBUG=gl-disable had been specified.
It's not used there, but both backends have independent
implementations of it.
I want to get rid of GdkGLContextX11, and moving code out of it is the
first step.
Now that we have the display's context to hook into, we can use it to
construct other GL contexts and don't need a GdkSurface vfunc anymore.
This has the added benefit that backends can have different GdkGLContext
classes on the display and get new GLContexts generated from them, so
we get multiple GL backend support per GDK backend for free.
I originally wanted to make this a vfunc on GdkGLContextClass, but
it turns out all the backends would just call g_object_new() anyway.
Instead of
  Display::make_gl_context_current()
we now have
  GLContext::clear_current()
  GLContext::make_current()
This fits better with the backends (we can actually implement
clearCurrent on macOS now) and makes it easier to implement different GL
backends for backends (like EGL/GLX on X11).
We also pass a surfaceless boolean to make_current() so the calling code
can decide if a surface needs to be bound or not, because the backends
were all doing whatever, which was very counterproductive.
The code to create and manage a fake egl surface to bind to is
complex and completely untested because everyone seems to support this
extension.
nvidia and Mesa do support it and according to Mesa devs, adding support
in a new driver is rather simple and Mesa drivers gain that feature
automatically, so all future drivers should have it.
... or more exactly: Only use paint contexts with
gdk_cairo_draw_from_gl().
Instead of paint contexts being the only contexts that call swapBuffer(),
any context can be used for this, when it's used with
begin_frame()/end_frame().
This removes 2 features:
1. We no longer need a big sharing hierarchy. All contexts are now
shared with gdk_display_get_gl_context().
2. There is no longer a difference between attached and non-attached
contexts. All contexts work the same way.
The vfunc is called to initialize GL and it returns a "base" context
that GDK then uses as the context all others are shared with. So the GL
context share tree now looks like:
+ context from init_gl
  - context1
  - context2
  ...
So this is a flat tree now, the complexity is gone.
The only caveat is that backends now need to create a GL context when
initializing GL so some refactoring was needed.
Two new functions have been added:
* gdk_display_prepare_gl()
This is public API and can be used to ensure that GL has been
initialized or if not, retrieve an error to display (or debug-print).
* gdk_display_get_gl_context()
This is a private function to retrieve the base context from
init_gl(). It replaces gdk_surface_get_shared_data_context().
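A usage sketch of the public half (the error handling shown is our own
choice of presentation):
```c
/* Sketch: check for GL support up front and surface the error if it
 * is unavailable. */
GError *error = NULL;

if (!gdk_display_prepare_gl (display, &error))
  {
    g_printerr ("GL is not available: %s\n", error->message);
    g_clear_error (&error);
  }
```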
We try EGL first, but are very picky about what we accept.
If that fails, we try to go with GLX instead.
And if that also fails, we try EGL again, but this time accept anything.
The idea here is that EGL is the preferred method going forward, but GLX is
the tried and tested method that we know works. So if we detect issues with
EGL, we want to avoid using it in favor of GLX.
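Schematically, the attempt order might be expressed like this (helper
names and the error domain are made up):
```c
/* Sketch: strict EGL first, then GLX, then lenient EGL as a last
 * resort. */
static gboolean
init_gl_backend (GdkDisplay *display, GError **error)
{
  if (try_egl (display, TRUE))   /* strict: picky about what we accept */
    return TRUE;
  if (try_glx (display))
    return TRUE;
  if (try_egl (display, FALSE))  /* lenient: accept anything */
    return TRUE;

  g_set_error (error, G_IO_ERROR, G_IO_ERROR_NOT_SUPPORTED,
               "No usable GL backend found");
  return FALSE;
}
```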
Also add a GDK_DEBUG=gl-egl option to force EGL at all costs and not try
GLX.
That way, we can give a useful error message when things break down for
users.
These error messages could still be improved in places (like looking at
the actual EGL error codes), but that seemed overkill.
Query the EGL_NATIVE_VISUAL_ID from the EGLConfig and select a config
with the matching Visual.
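Roughly, with illustrative variable names:
```c
/* Sketch: map an EGLConfig to its X Visual via the native visual ID. */
EGLint visual_id;
XVisualInfo visual_template;
XVisualInfo *visuals;
int n_visuals;

if (eglGetConfigAttrib (egl_display, egl_config,
                        EGL_NATIVE_VISUAL_ID, &visual_id))
  {
    visual_template.visualid = visual_id;
    visuals = XGetVisualInfo (xdisplay, VisualIDMask,
                              &visual_template, &n_visuals);
    /* ... pick visuals[0].visual when n_visuals > 0 ... */
    XFree (visuals);
  }
```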
This is currently broken on Mesa because it does not expose any RGBA
X Visuals in any EGL config, so we always end up with opaque Windows.
https://gitlab.freedesktop.org/mesa/mesa/-/issues/149
This reverts commit c35a6725b9.
This approach doesn't work because if NVIDIA doesn't work for EGL, the
EGL implementation won't be provided by NVIDIA, so checking the vendor
doesn't work.
Instead, use the display's "leader surface" when no surface is required,
because we have it lying around.
Really, we want to use EGL_NO_SURFACE, but if that's not supported...
Instead of going via GdkVisual, doing a preselection and letting the GL
initialization improve it, let the GL initialization pick an X Visual
directly, using plain X Visual code.
The code should select the same visuals as before as it tries to apply
the same logic, but it's a rewrite, so I expect I messed something up.
1. We're using EGL most of the time anyway, so if we wanted to cache
things, we'd need to port it there.
2. Our GL handling is massively configurable, so determining when to use
the cache and when not is a challenge.
3. It makes startup nondeterministic and dependent on whether a GTK4 app
has previously been started on this display and nobody thinks about
that when debugging.
4. The only benefit of the caching is delaying GL initialization - which
made sense in GTK3 where almost no app used GL but doesn't make sense
in GTK4 where almost every app uses GL.
So unless I find a big benefit to reintroducing it, this cache will be
gone for good.
Avoids having to use private data, though the benefit is somewhat
limited as we still have to put the destructor in the egl code and can't
just put it in gdk_surface_x11_finalize().
We only have one config, because we use the same Visual everywhere.
Store this config in the GdkDisplayX11 struct for easy access.
Also do this on initialize, because if creating the config fails, we
want to switch to GLX instead of failing to do GL at all.
This also simplifies a lot of code as we can share Visual, Colormap, etc
across surfaces.
There's no need to use g_object_set_data() for it.
We can also stop caching it elsewhere because we know the display has
it.
And finally, we can remove the display->have_egl boolean and use
display->egl_display != NULL instead. We initialize the display at
startup, so that variable is the perfect indicator.
We need to initialize GL to select the Visual we are going to use for
all our Windows.
As the Visual needs to be known before we know if we are even gonna use
GL later, we can't avoid initializing it.
Note that this previously happened, too. It was just hidden behind the
GdkScreen initialization.