Sometimes, GLX can decide to use the previous request serial when faking
XErrors via __glXSendError() (look through the Mesa sources to enjoy).
This can cause the error trap we just installed to not feel responsible
for the error. And that makes GDK decide to immediately abort the
application.
That is not what we or GLX want.
So we use a no-op X Request to bump the request number so that when GLX
does its shenanigans, it uses a serial that our error trap will catch.
Fixes a crash in mutter's CI which apparently manages to drive GLX
without an X server.
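A minimal sketch of the trick, assuming Xlib and GDK's public X11 error
trap API (the surrounding function and the GLX call site are
illustrative):

    #include <X11/Xlib.h>
    #include <gdk/x11/gdkx.h>

    static void
    create_context_with_bumped_serial (GdkDisplay *display)
    {
      Display *xdisplay = gdk_x11_display_get_xdisplay (display);

      /* XNoOp() sends an X_NoOperation request, advancing the request
       * serial without any other effect */
      XNoOp (xdisplay);

      gdk_x11_display_error_trap_push (display);
      /* ... call into GLX here; an error faked via __glXSendError()
       * now carries a serial that our trap covers ... */
      gdk_x11_display_error_trap_pop_ignored (display);
    }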
In error cases, glXCreateContextAttribsARB() will always return NULL,
so it is enough to run the loop until the first non-NULL context is
returned.
And at that point, we can just look at the return value and ignore all
errors.
Instead of trapping errors for the whole loop trying to create GL
contexts, trap them once per GL context.
Apparently GLX does throw an error when a too-high version is
requested, instead of just returning NULL, and that error then lingers
when we try lower versions.
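A sketch of the resulting shape, assuming libepoxy resolves
glXCreateContextAttribsARB() and with an illustrative version list
(fbconfig and share stand in for values the real code already has):

    static GLXContext
    create_context (GdkDisplay *display,
                    Display    *xdisplay,
                    GLXFBConfig fbconfig,
                    GLXContext  share)
    {
      static const int versions[][2] = { { 4, 0 }, { 3, 2 }, { 3, 0 } };
      GLXContext context = NULL;

      for (gsize i = 0; i < G_N_ELEMENTS (versions) && context == NULL; i++)
        {
          int attribs[] = {
            GLX_CONTEXT_MAJOR_VERSION_ARB, versions[i][0],
            GLX_CONTEXT_MINOR_VERSION_ARB, versions[i][1],
            None
          };

          /* trap per attempt, so an error thrown for a too-high
           * version cannot leak into the next iteration */
          gdk_x11_display_error_trap_push (display);
          context = glXCreateContextAttribsARB (xdisplay, fbconfig,
                                                share, True, attribs);
          gdk_x11_display_error_trap_pop_ignored (display);
        }

      return context;
    }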
Fixes #5857
XWayland (at least on gnome-shell) does not support SGI_swap_control,
which we were using to unset the swap interval.
It does support EXT_swap_control though, which is the more modern
version of the same thing, so this commit adds support for that.
And now GDK_DEBUG=no-vsync gives me >1000fps instead of just 60fps.
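Roughly what the fallback looks like, as a sketch using libepoxy's
extension check (xdisplay, screen and drawable are assumed to be in
scope):

    /* interval 0 = don't wait for vblank */
    if (epoxy_has_glx_extension (xdisplay, screen, "GLX_EXT_swap_control"))
      glXSwapIntervalEXT (xdisplay, drawable, 0);
    else if (epoxy_has_glx_extension (xdisplay, screen, "GLX_SGI_swap_control"))
      glXSwapIntervalSGI (1);  /* the SGI variant cannot set an interval of 0 */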
The GLES 2.0 version is fine now with current GTK, according to B. Otte.
Let's use the same minimum requirement for all implementations.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
... and use this check in gdk_gl_context_make_current() and
gdk_gl_context_get_current() to make sure the context really is still
current.
The context no longer being current can happen when external GL
implementations make their own contexts current in the same threads GDK
contexts are used in.
And that can happen, for example, with WebKit.
Theoretically, this should also allow external EGL code to run in X11
applications when GDK chooses to use GLX, but I didn't try it.
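The idea of the check, in a hedged EGL-flavoured sketch (the GLX path
would compare glXGetCurrentContext() instead; context_get_egl_context()
is a hypothetical accessor):

    static gboolean
    context_is_still_current (GdkGLContext *context)
    {
      /* ask EGL what is really bound in this thread instead of
       * trusting the value GDK cached when making it current */
      return eglGetCurrentContext () == context_get_egl_context (context);
    }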
Fixes #5392
Mimic the behavior of the EGL context creation by establishing
some sane logic for the API and version used. Split the decision
about the type of context (API, legacy) from the creation of a context
of a certain version and all its properties.
When destroying the EGLSurface or GLXDrawable of a GdkSurface, make sure
the current context is not still bound to it.
If it is, clear the current context.
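On the EGL side this amounts to something like the following sketch
(egl_display and egl_surface belong to the GdkSurface being destroyed):

    if (eglGetCurrentSurface (EGL_DRAW) == egl_surface ||
        eglGetCurrentSurface (EGL_READ) == egl_surface)
      eglMakeCurrent (egl_display, EGL_NO_SURFACE,
                      EGL_NO_SURFACE, EGL_NO_CONTEXT);

    eglDestroySurface (egl_display, egl_surface);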
Fixes #4554
Add gdk_gl_context_is_api_allowed() for backends and make them use it.
Finally, have them return the final API as the return value (or 0 on
error).
And then use that API instead of a use_es boolean flag.
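A sketch of how a backend might use it (the function is private GDK
API, so the exact signature here is an assumption):

    static GdkGLAPI
    realize (GdkGLContext *context, GError **error)
    {
      if (!gdk_gl_context_is_api_allowed (context, GDK_GL_API_GL, error))
        return 0;  /* 0 means error, per the convention above */

      /* ... create the GL context ... */

      return GDK_GL_API_GL;
    }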
Fixes #4221
Creative people managed to create an X11 display and a Wayland display
at once, thereby getting EGL and GLX involved in a fight to the death
over the ownership of the glFoo() symbolspace.
A way to force such a fight with available tools here is (on Wayland)
running something like:
GTK_INSPECTOR_DISPLAY=:1 GTK_DEBUG=interactive gtk4-demo
Related: xdg-desktop-portal-gnome#5
It's not used there, but both backends have independent
implementations for it.
I want to get rid of GdkGLContextX11 and moving code from it is the
first step.
Now that we have the display's context to hook into, we can use it to
construct other GL contexts and don't need a GdkSurface vfunc anymore.
This has the added benefit that backends can have different GdkGLContext
classes on the display and get new GLContexts generated from them, so
we get multiple GL backend support per GDK backend for free.
I originally wanted to make this a vfunc on GdkGLContextClass, but
it turns out all the backends would just call g_object_new() anyway.
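Which is roughly why plain construction suffices; a hedged sketch,
assuming the "display" construct property and the private
gdk_display_get_gl_context() getter described further down:

    static GdkGLContext *
    new_sibling_context (GdkDisplay *display)
    {
      GdkGLContext *base = gdk_display_get_gl_context (display);

      /* same class as the display's context, no backend vfunc needed */
      return g_object_new (G_OBJECT_TYPE (base),
                           "display", display,
                           NULL);
    }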
Instead of
Display::make_gl_context_current()
we now have
GLContext::clear_current()
GLContext::make_current()
This fits better with the backends (we can actually implement
clear_current() on macOS now) and makes it easier to implement
different GL backends per GDK backend (like EGL/GLX on X11).
We also pass a surfaceless boolean to make_current() so the calling code
can decide if a surface needs to be bound or not, because the backends
were all doing whatever, which was very counterproductive.
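Sketched as vfuncs (simplified; the real class struct has more
members):

    struct _GdkGLContextClass
    {
      GdkDrawContextClass parent_class;

      gboolean (* clear_current) (GdkGLContext *context);
      gboolean (* make_current)  (GdkGLContext *context,
                                  gboolean      surfaceless);
    };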
... or more exactly: Only use paint contexts with
gdk_cairo_draw_from_gl().
Instead of paint contexts being the only contexts that call
swapBuffer(), any context can be used for this when it's used with
begin_frame()/end_frame().
This removes 2 features:
1. We no longer need a big sharing hierarchy. All contexts are now
shared with gdk_display_get_gl_context().
2. There is no longer a difference between attached and non-attached
contexts. All contexts work the same way.
The vfunc is called to initialize GL and it returns a "base" context
that GDK then uses as the context all others are shared with. So the GL
context share tree now looks like:
+ context from init_gl
- context1
- context2
...
So this is a flat tree now, the complexity is gone.
The only caveat is that backends now need to create a GL context when
initializing GL so some refactoring was needed.
Two new functions have been added:
* gdk_display_prepare_gl()
This is public API and can be used to ensure that GL has been
initialized or if not, retrieve an error to display (or debug-print).
* gdk_display_get_gl_context()
This is a private function to retrieve the base context from
init_gl(). It replaces gdk_surface_get_shared_data_context().
That way, we can give a useful error message when things break down for
users.
These error messages could still be improved in places (like looking at
the actual EGL error codes), but that seemed overkill.
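For example, a caller can now do (display being any GdkDisplay):

    GError *error = NULL;

    if (!gdk_display_prepare_gl (display, &error))
      {
        g_printerr ("GL is not available: %s\n", error->message);
        g_clear_error (&error);
      }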
Instead of going via GdkVisual, doing a preselection and letting the GL
initialization improve it, let the GL initialization pick an X Visual
directly, using X Visual code.
The code should select the same visuals as before as it tries to apply
the same logic, but it's a rewrite, so I expect I messed something up.
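The core of that selection, as a sketch (assuming an fbconfig already
chosen via glXChooseFBConfig()):

    XVisualInfo *visinfo = glXGetVisualFromFBConfig (xdisplay, fbconfig);

    if (visinfo != NULL)
      {
        Visual *visual = visinfo->visual;  /* the one Visual GDK will use */
        int depth = visinfo->depth;
        /* ... store them on the display ... */
        XFree (visinfo);
      }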
1. We're using EGL most of the time anyway, so if we wanted to cache
things, we'd need to port it there.
2. Our GL handling is massively configurable, so determining when to use
the cache and when not is a challenge.
3. It makes startup nondeterministic and dependent on whether a GTK4
app has previously been started on this display, and nobody thinks
about that when debugging.
4. The only benefit of the caching is delaying GL initialization - which
made sense in GTK3 where almost no app used GL but doesn't make sense
in GTK4 where almost every app uses GL.
So unless I find a big benefit to reintroducing it, this cache will be
gone for good.
We don't want to bind ourselves to GTK3 - both because we don't want to
accidentally cause bugs in a different codebase and because we want to
deviate from it.
While doing so, also store visuals as visuals and not as integers.
And only store one Visual because GTK4 only uses one visual.
And then remove the code that is left over for dealing with the
compatibility Visual for GTK3.
PS: I'm kinda proud of my STRINGIFY_WITHOUT_BRACKETS hack.
Check that we are indeed running inside an Xorg server before enabling
the workaround.
XWayland or other nested X servers deadlock when that workaround is
applied.
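One hedged way to detect that: modern XWayland registers an "XWAYLAND"
extension precisely so clients can tell (illustrative; the actual test
in GDK may differ, and this does not catch other nested servers):

    #include <glib.h>
    #include <X11/Xlib.h>

    static gboolean
    display_is_xwayland (Display *xdisplay)
    {
      int opcode, event, error;

      return XQueryExtension (xdisplay, "XWAYLAND",
                              &opcode, &event, &error);
    }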
If we want to add an EGL implementation for the X11 backend, we are
going to need to move the GLX bits into their own class. The first step
is to declare GdkX11GLContext as an abstract type, and then subclass it
into a GdkX11GLContextGLX type, which includes the whole GLX
implementation.
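The split itself is standard GObject boilerplate, roughly:

    G_DEFINE_ABSTRACT_TYPE (GdkX11GLContext, gdk_x11_gl_context,
                            GDK_TYPE_GL_CONTEXT)

    G_DEFINE_TYPE (GdkX11GLContextGLX, gdk_x11_gl_context_glx,
                   GDK_TYPE_X11_GL_CONTEXT)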