We have a global GdkGLBackendType now, just set it.
This way, using the variable forces the backend type, and we don't need
special code handling the env vars in the backends.
It also means setting the env var will now "work" on GDK backends that
don't even support that GL backend, and simulate another GDK backend
having registered that GL backend already. So you can run
GDK_DEBUG=gl-wgl gtk4-demo
to test what Wayland will do when WGL is in use.
Print the extensions one per line, and sort them
alphabetically, so it is actually possible to find
something in the list.
Also print a short description of the chosen config.
Creative people managed to create an X11 display and a Wayland display
at once, thereby getting EGL and GLX involved in a fight to the death
over the ownership of the glFoo() symbol space.
A way to force such a fight with available tools here is (on Wayland)
running something like:
GTK_INSPECTOR_DISPLAY=:1 GTK_DEBUG=interactive gtk4-demo
Related: xdg-desktop-portal-gnome#5
It seems these are sent with `xwindow` set to the root window, so this
was failing to find a surface and get the screen from that.
I'm not sure if there's a reason not to get the screen this way
elsewhere in the function, but it seems this should be correct.
This fixes the behavior of `gdk_x11_display_get_monitors()`, which
wasn't correctly changing when monitors were added or removed. For
instance, this python code was always showing the same number of
monitors when one was turned off and on, but updates correctly with this
change applied:
```python
import gi
gi.require_version("GLib", "2.0")
gi.require_version("Gdk", "4.0")
gi.require_version("Gtk", "4.0")
from gi.repository import GLib, Gdk, Gtk
def f():
    print(len(Gdk.Display.get_default().get_monitors()))
    return True

GLib.timeout_add_seconds(1, f)
GLib.MainLoop().run()
```
With gtkmm, when using `Application()`, the display is initialized
before we know the application name, and therefore the program class
associated with the display is NULL.
Instead of providing a default value, we set it to the program name
when it is NULL. Moreover, we give up on capitalizing the class name to
keep the code super simple. Also, not using a capitalized name is
consistent with `gdk_x11_display_open()`. If someone has a good reason
to use a capitalized name, here is how to do it:
```c
class_hint = XAllocClassHint ();
class_hint->res_name = (char *) g_get_prgname ();
if (display_x11->program_class)
  {
    class_hint->res_class = g_strdup (display_x11->program_class);
  }
else if (class_hint->res_name && class_hint->res_name[0])
  {
    class_hint->res_class = g_strdup (class_hint->res_name);
    class_hint->res_class[0] = g_ascii_toupper (class_hint->res_class[0]);
  }
XSetClassHint (xdisplay, impl->xid, class_hint);
g_free (class_hint->res_class);
XFree (class_hint);
```
Fixes: eff53c023a ("x11: set a default value for program_class")
Usually the "dnd-finished" signal will be used to unref the GdkDrag. In
those cases, we would lose the object, so that when we do the final
drag_drop_done() afterwards, we wouldn't have a remaining reference.
With the reference guard, this now works.
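For illustration, a minimal sketch of the guard pattern, assuming plain
GObject refcounting around the emission (the emission details are
illustrative; gdk_drag_drop_done() is the real API):
```c
/* Sketch: keep the GdkDrag alive across the "dnd-finished" emission,
 * since handlers commonly drop their reference there. */
g_object_ref (drag);
g_signal_emit_by_name (drag, "dnd-finished");
/* the guard reference keeps the object valid for the final cleanup */
gdk_drag_drop_done (drag, TRUE);
g_object_unref (drag);
```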
It is good practice for (floating) window managers to respect explicit
position hints from clients (as long as the window wouldn't end up
off-screen etc.).
Before commit 13d3afa56e, GTK had a flag for setting the PPosition hint,
but now sets it unconditionally. However, the real intention is to *not*
request a fixed position, so don't do that.
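For illustration, a hedged Xlib-level sketch of not requesting a fixed
position (the size values are made up):
```c
/* Sketch: request sizes via WM_NORMAL_HINTS, but leave out the
 * PPosition bit so the window manager is free to place the window. */
XSizeHints *hints = XAllocSizeHints ();
hints->flags = PMinSize;   /* note: no PPosition */
hints->min_width = 200;
hints->min_height = 100;
XSetWMNormalHints (xdisplay, xwindow, hints);
XFree (hints);
```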
nvidia sets the default draw buffer to GL_NONE if EGL contexts are
initially bound to EGL_NO_SURFACE, which is exactly what we are doing.
So bind the draw buffer to GL_BACK when drawing, as it should be.
See https://phabricator.services.mozilla.com/D118743 for a discussion
about EGL_NO_CONTEXT and draw buffers.
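A minimal sketch of the workaround, assuming a context created against
EGL_NO_SURFACE that is later made current on a real surface:
```c
/* Sketch: once an actual surface is bound for drawing, reset the draw
 * buffer that nvidia may have left at GL_NONE. */
eglMakeCurrent (egl_display, egl_surface, egl_surface, egl_context);
glDrawBuffer (GL_BACK);
```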
This has the benefit that we can refactor it and make sure we deal with
GdkDisplay::init_gl() not being called at all because
GDK_DEBUG=gl-disable had been specified.
It's not used there, but both backends have independent
implementations for it.
I want to get rid of GdkGLContextX11, and moving code out of it is the
first step.
Now that we have the display's context to hook into, we can use it to
construct other GL contexts and don't need a GdkSurface vfunc anymore.
This has the added benefit that backends can have different GdkGLContext
classes on the display and get new GLContexts generated from them, so
we get multiple GL backend support per GDK backend for free.
I originally wanted to make this a vfunc on GdkGLContextClass, but
it turns out all the backends would just call g_object_new() anyway.
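Roughly, what object construction instead of a vfunc looks like; the
helper below is hypothetical, though GdkDrawContext's "display" property
is real:
```c
/* Hypothetical sketch: new contexts take the class of the display's
 * base context, so each GL backend yields its own subclass. */
static GdkGLContext *
create_gl_context (GdkDisplay *display)
{
  GdkGLContext *base = gdk_display_get_gl_context (display);

  return g_object_new (G_OBJECT_TYPE (base),
                       "display", display,
                       NULL);
}
```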
Instead of
Display::make_gl_context_current()
we now have
GLContext::clear_current()
GLContext::make_current()
This fits better with the backends (we can actually implement
clear_current() on macOS now) and makes it easier to implement
different GL backends per GDK backend (like EGL/GLX on X11).
We also pass a surfaceless boolean to make_current() so the calling code
can decide whether a surface needs to be bound or not; previously, the
backends were each doing whatever, which was very counterproductive.
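As a hedged sketch, an EGL-flavored implementation of the two new entry
points might look like this (egl_display, egl_context and
egl_surface_of() are illustrative names, not the actual GDK code):
```c
/* Illustrative sketch of the per-context API on an EGL backend. */
static void
gdk_egl_context_clear_current (GdkGLContext *context)
{
  eglMakeCurrent (egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                  EGL_NO_CONTEXT);
}

static gboolean
gdk_egl_context_make_current (GdkGLContext *context,
                              gboolean      surfaceless)
{
  /* the caller decides whether a surface must be bound */
  EGLSurface surface = surfaceless ? EGL_NO_SURFACE
                                   : egl_surface_of (context);

  return eglMakeCurrent (egl_display, surface, surface, egl_context);
}
```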
The code to create and manage a fake EGL surface to bind to is
complex and completely untested, because everyone seems to support this
extension.
nvidia and Mesa do support it, and according to Mesa devs, adding
support in a new driver is rather simple; Mesa drivers gain that feature
automatically, so all future drivers should have it.
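A minimal sketch of the capability check, assuming the extension in
question is EGL_KHR_surfaceless_context and using libepoxy's query:
```c
/* Sketch: bind surfaceless when the driver allows it, otherwise fall
 * back to the (untested) fake surface path. */
if (epoxy_has_egl_extension (egl_display, "EGL_KHR_surfaceless_context"))
  eglMakeCurrent (egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE, egl_context);
else
  eglMakeCurrent (egl_display, fake_surface, fake_surface, egl_context);
```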
... or more exactly: Only use paint contexts with
gdk_cairo_draw_from_gl().
Instead of paint contexts being the only contexts that swap buffers,
any context can be used for this, when it's used with
begin_frame()/end_frame().
This removes 2 features:
1. We no longer need a big sharing hierarchy. All contexts are now
shared with gdk_display_get_gl_context().
2. There is no longer a difference between attached and non-attached
contexts. All contexts work the same way.
The vfunc is called to initialize GL and it returns a "base" context
that GDK then uses as the context all others are shared with. So the GL
context share tree now looks like:
+ context from init_gl
  - context1
  - context2
  ...
So this is a flat tree now, the complexity is gone.
The only caveat is that backends now need to create a GL context when
initializing GL so some refactoring was needed.
Two new functions have been added:
* gdk_display_prepare_gl()
This is public API and can be used to ensure that GL has been
initialized or if not, retrieve an error to display (or debug-print).
* gdk_display_get_gl_context()
This is a private function to retrieve the base context from
init_gl(). It replaces gdk_surface_get_shared_data_context().
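Typical use of the new public entry point could look like this sketch
(the fallback handling is just one option):
```c
/* Check up front whether GL can be used on this display. */
GError *error = NULL;

if (!gdk_display_prepare_gl (display, &error))
  {
    g_debug ("GL unavailable: %s", error->message);
    g_clear_error (&error);
    /* e.g. fall back to software rendering */
  }
```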
We try EGL first, but are very picky about what we accept.
If that fails, we try to go with GLX instead.
And if that also fails, we try EGL again, but this time accept anything.
The idea here is that EGL is the preferred method going forward, but GLX is
the tried and tested method that we know works. So if we detect issues with
EGL, we want to avoid using it in favor of GLX.
Also add a GDK_DEBUG=gl-egl option to force EGL at all costs and not try
GLX.
That way, we can give a useful error message when things break down for
users.
These error messages could still be improved in places (like looking at
the actual EGL error codes), but that seemed overkill.
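Schematically, the cascade described above; the init helpers and flags
are hypothetical names, not the actual GDK code:
```c
/* Hypothetical sketch of the EGL -> GLX -> EGL-anything order. */
if (force_egl)                                      /* GDK_DEBUG=gl-egl */
  return init_egl (display, ACCEPT_ANYTHING, error);

if (init_egl (display, BE_PICKY, NULL))             /* 1st: strict EGL */
  return TRUE;

if (init_glx (display, NULL))                       /* 2nd: trusted GLX */
  return TRUE;

return init_egl (display, ACCEPT_ANYTHING, error);  /* last resort */
```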
Query the EGL_NATIVE_VISUAL_ID from the EGLConfig and select a config
with the matching X Visual.
This is currently broken on Mesa because it does not expose any RGBA
X Visuals in any EGL config, so we always end up with opaque Windows.
https://gitlab.freedesktop.org/mesa/mesa/-/issues/149
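A short sketch of the matching step; the config loop is illustrative,
while eglGetConfigAttrib() and XVisualIDFromVisual() are the real calls:
```c
/* Sketch: pick the EGLConfig whose native visual matches the X Visual
 * we render with. */
for (i = 0; i < n_configs; i++)
  {
    EGLint visual_id;

    if (eglGetConfigAttrib (egl_display, configs[i],
                            EGL_NATIVE_VISUAL_ID, &visual_id) &&
        (VisualID) visual_id == XVisualIDFromVisual (chosen_visual))
      return configs[i];
  }
```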
This reverts commit c35a6725b9.
This approach doesn't work: if NVIDIA's EGL doesn't work, the EGL
implementation in use won't be provided by NVIDIA, so checking the
vendor cannot detect the problem.
Instead, use the display's "leader surface" when no surface is required,
because we have it lying around.
Really, we want to use EGL_NO_SURFACE, but if that's not supported...
Instead of going via GdkVisual, doing a preselection and letting the GL
initialization improve it, let the GL initialization pick an X Visual
directly, using plain X Visual code.
The code should select the same visuals as before as it tries to apply
the same logic, but it's a rewrite, so I expect I messed something up.
1. We're using EGL most of the time anyway, so if we wanted to cache
things, we'd need to port it there.
2. Our GL handling is massively configurable, so determining when to use
the cache and when not is a challenge.
3. It makes startup nondeterministic: it depends on whether a GTK4 app
has previously been started on this display, and nobody thinks about
that when debugging.
4. The only benefit of the caching is delaying GL initialization - which
made sense in GTK3 where almost no app used GL but doesn't make sense
in GTK4 where almost every app uses GL.
So unless I find a big benefit to reintroducing it, this cache will be
gone for good.
Avoids having to use private data, though the benefit is somewhat
limited as we still have to put the destructor in the EGL code and can't
just put it in gdk_surface_x11_finalize().
We only have one config, because we use the same Visual everywhere.
Store this config in the GdkDisplayX11 struct for easy access.
Also do this at initialization time, because if creating the config
fails, we want to switch to GLX instead of failing to do GL at all.
This also simplifies a lot of code, as we can share the Visual,
Colormap, etc. across surfaces.
There's no need to use g_object_set_data() for it.
We can also stop caching it elsewhere because we know the display has
it.
And finally, we can remove the display->have_egl boolean and use
display->egl_display != NULL instead. We initialize the display at
startup, so that variable is the perfect indicator.
We need to initialize GL to select the Visual we are going to use for
all our Windows.
As the Visual needs to be known before we know if we are even going to
use GL later, we can't avoid initializing it.
Note that this previously happened, too. It was just hidden behind the
GdkScreen initialization.
We don't want to bind ourselves to GTK3 - both because we don't want to
accidentally cause bugs in a different codebase and because we want to
deviate from it.
While doing so, also store visuals as visuals and not as integers.
And only store one Visual because GTK4 only uses one visual.
And then remove the code that is leftover for dealing with the
compatibility Visual for GTK3.
PS: I'm kinda proud of my STRINGIFY_WITHOUT_BRACKETS hack.
The old code was ordering visuals by depth, but considering that these
days we either use the default visual or a 32-bit RGBA visual, that
reordering no longer has an effect.
In theory, the only effect is that the GLX Visual selection might select
a different replacement Visual when it checks for improved GL Visuals, but
even there I can't come up with a case where that matters, because
again, the visuals are only reordered by depth and we want to keep the
depth.
In any case, make this a separate commit so bisecting can find this
problem if it ever shows up.
Instead of the display telling the screen to tell the visuals to tell
the display to initialize itself, just init the display directly.
What a concept.
It's only used during DND to allow use of the root window's COW
(composite overlay window) as a DND target, because apparently
gnome-shell used to think it was a great idea to use that for DND to the
overview.
Somebody should complain to the gnome-shell devs about it not being a
good idea if they want it fixed.
Potentially using Wayland is a better idea though.
This reverts 85ae875dcb
Related: https://bugzilla.gnome.org/show_bug.cgi?id=601731
Check that we are indeed running inside an Xorg server before enabling
the workaround.
XWayland or other nested X servers deadlock when that workaround is
applied.
Remove a boatload of "or %NULL" from nullable parameters
and return values. gi-docgen generates suitable text from
the annotation that we don't need to duplicate.
This adds a few missing nullable annotations too.
If we want to add an EGL implementation for the X11 backend, we are
going to need to move the GLX bits into their own class. The first step
is to declare GdkX11GLContext as an abstract type, and then subclass it
into a GdkX11GLContextGLX type, which includes the whole GLX
implementation.
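In GObject terms, the split looks roughly like this sketch (not the
verbatim declarations):
```c
/* Sketch: the base X11 context becomes abstract, and the GLX code
 * moves into a concrete subclass. */
G_DEFINE_ABSTRACT_TYPE (GdkX11GLContext, gdk_x11_gl_context,
                        GDK_TYPE_GL_CONTEXT)

G_DEFINE_TYPE (GdkX11GLContextGLX, gdk_x11_gl_context_glx,
               GDK_TYPE_X11_GL_CONTEXT)
```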
The condition we check for to catch X servers going away
may not be accurate anymore, and the warning shows up in
logs, causing customers to be concerned. So, be quiet by
default, unless the user explicitly asked for a message.
The DnD code for X11 adds the composite overlay window (aka COW) to the
cache.
Yet the X11 requests to get and release the COW may trigger XErrors that
we ought to ignore; otherwise the client will abort.
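A minimal sketch of the fix, wrapping the composite overlay window
requests in GDK's X error traps:
```c
/* Sketch: trap XErrors around the COW requests so a failing
 * XComposite call doesn't abort the client. */
Window cow;

gdk_x11_display_error_trap_push (display);
cow = XCompositeGetOverlayWindow (xdisplay, xroot);
/* ... add the COW to the surface cache ... */
XCompositeReleaseOverlayWindow (xdisplay, xroot);
gdk_x11_display_error_trap_pop_ignored (display);
```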
Fixes: #3715
If cairo is a subproject, it's not necessarily installed when gtk
is built. In the source tree, cairo's headers are not stored in
a directory called 'cairo'.
We were calling _gdk_surface_update_size() every frame, even if the
window size didn't change. This would cause us to discard all cached
buffers and redraw the whole screen.
This was BAD.
Whenever we communicate targets, we need to take the union, otherwise
we don't tell the other side about our serializations. This makes
drops of images from gtk4-icon-browser to gimp and libreoffice
succeed in transferring data.
Fixes: #3654
When creating the output stream for a drop, we must
pass the mimetypes we support, otherwise the picking
of the right handler does not work.
Fixes: #3652
On X11, the toplevel layout is not created before the toplevel is
presented, but GTK tries to update it on idle, which leads to a crash
due to accessing a property of an undefined object. Treat the
soon-to-be-created layout as a layout with default values (resizable)
upon creation.
Depending on the input driver, we will get XI_Motion based scroll
events for regular mouse wheels. These are intended to be handled
as discrete scroll, so detect smooth scroll events that move by
exactly 1.0 in either direction.
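The detection boils down to a check like this sketch (the delta field
names are illustrative):
```c
/* Sketch: a smooth scroll that moves by exactly 1.0 on a single axis
 * is treated as a discrete wheel click. */
gboolean is_wheel =
  (delta_x == 0.0 && (delta_y == 1.0 || delta_y == -1.0)) ||
  (delta_y == 0.0 && (delta_x == 1.0 || delta_x == -1.0));
```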
Fixes: https://gitlab.gnome.org/GNOME/gtk/-/issues/3459
When being fullscreen, and wanting to unfullscreen but not caring about
whether to go unmaximized or maximized (as this information is lost), if
the GdkToplevelLayout represents the full intended state, we won't be
able to do the right thing.
To avoid this issue, make the GdkToplevelLayout API intent-based: if one
e.g. doesn't call gdk_toplevel_layout_set_maximized() with anything, the
backend will not attempt to change the maximized state.
This means we can also remove the old 'initially_maximized' and
'initially_fullscreen' fields from the private GtkWindow struct, as we
only deal with intents now.
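Assuming the final shape of the API, an intent-based unfullscreen looks
like this sketch; not touching the maximized intent means the backend
leaves that state alone:
```c
/* Sketch: express only the intents we care about. Not calling
 * gdk_toplevel_layout_set_maximized() means "don't change maximized". */
GdkToplevelLayout *layout = gdk_toplevel_layout_new ();

gdk_toplevel_layout_set_fullscreen (layout, FALSE, NULL);
gdk_toplevel_present (toplevel, layout);
gdk_toplevel_layout_unref (layout);
```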
It was used by all surfaces to track 'is-mapped', but still part of the
GdkToplevelState, and is now replaced with a separate boolean in the
GdkSurface structure.
It also caused issues when a widget was unmapped, and due to that
unmapped a popover which hid its corresponding surface. When this
surface was hidden, it emitted a state change event, which would then go
back into GTK and queue a resize on the popover widget, which would
travel back down to the widget that was originally unmapped, causing
confusion when doing future allocations.
To summarize, one should not hide widgets during allocation, and to
avoid this, make this new is-mapped boolean asynchronous when hiding a
surface, meaning the notification event for the changed mapped state
will be emitted in an idle callback. This avoids the above described
reentry issue.
This will sometimes mean a frame is skipped if a resize was requested
during the update phase of the frame dispatch. Not skipping it could
mean trying to allocate a window smaller than the minimum size of the
widget.
If compute_size() returns TRUE, the layout will not be propagated to
GTK. This will be used by the X11 backend to queue asynchronous resizes
that shouldn't yet allocate in GTK.
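Schematically, the contract might look like this (the backend helper is
hypothetical):
```c
/* Hypothetical sketch: returning TRUE from the backend's compute_size()
 * means "size not final yet, don't hand a layout to GTK". */
static gboolean
gdk_x11_surface_compute_size (GdkSurface *surface)
{
  if (resize_is_pending (surface))  /* hypothetical helper */
    return TRUE;   /* async resize queued; skip propagation */

  return FALSE;    /* layout is final; GTK may allocate */
}
```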
This removes the gdk_surface_set_shadow_width() function and related
vfuncs. The point here is that the shadow width and surface size can now
be communicated to GDK atomically, meaning it's possible to avoid
intermediate stages where the surface size includes the shadow, but
without the shadow width set, or the other way around.
This follows the trail of the Wayland backend in that GdkSurface changes
happen during the layout phase, and GDK_CONFIGURE is no longer used to
communicate the size changes of a surface; this now also uses the layout
signal on the GdkSurface.
Reading the comment, it seems to be related to being a window manager
decoration utility; this is not something GTK4 aims to handle, so just
drop support for it.
The plan is to concentrate size computations as part of the frame clock
dispatch, meaning we shouldn't do them synchronously in the present()
function.
Still, on Wayland, and maybe elsewhere, it is done in the present()
function, e.g. when no state change was made, but this will eventually
be changed.
GTK will not up front know how to correctly calculate a size, since it
will not be able to reliably predict the constraints that may exist
where it will be mapped.
Thus, to handle this, calculate the size of the toplevel by having GDK
emitting a signal called 'compute-size' that will contain information
needed for computing a toplevel window size.
This signal may be emitted at any time, e.g. during
gdk_toplevel_present(), or spontaneously if constraints change.
This also drops the max size from the toplevel layout, while moving the
min size from the toplevel layout struct to the struct passed via the
signal.
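A sketch of a client responding to the new signal; the concrete sizes
are made up, while the GdkToplevelSize accessors are the real API:
```c
/* Sketch: compute the toplevel size when GDK asks for it. */
static void
on_compute_size (GdkToplevel     *toplevel,
                 GdkToplevelSize *size,
                 gpointer         user_data)
{
  int bounds_width, bounds_height;

  /* constraints of the area where the surface may be mapped */
  gdk_toplevel_size_get_bounds (size, &bounds_width, &bounds_height);

  /* the min size now travels via this struct, not the layout */
  gdk_toplevel_size_set_min_size (size, 200, 100);
  gdk_toplevel_size_set_size (size,
                              MIN (640, bounds_width),
                              MIN (480, bounds_height));
}

/* connect before presenting; GDK may emit this at any time */
g_signal_connect (toplevel, "compute-size",
                  G_CALLBACK (on_compute_size), NULL);
```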
This needs changes to a test case where we make sure we process
GDK_CONFIGURE etc., which means we also need to show the window and
process all pending events in the test-focus-chain test case.