There's no need for EGL to do any timing; we do it in GTK already.
This fixes hangs in Mesa when we hide a surface after a SwapBuffers()
but before the frame callback arrives.
If we then reshow the surface and immediately render to it, Mesa would
still have a frame callback from before the hiding and forever poll()
waiting for the compositor to send the callback.
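As a hedged sketch, assuming the fix amounts to setting the EGL swap
interval to 0 so eglSwapBuffers() never waits on a frame callback itself:

  #include <EGL/egl.h>

  /* Sketch only: GTK's frame clock already paces rendering, so tell EGL
   * not to block in eglSwapBuffers() waiting for a frame callback. */
  static void
  disable_egl_frame_timing (EGLDisplay egl_display)
  {
    /* an interval of 0 means "do not synchronize swaps yourself" */
    eglSwapInterval (egl_display, 0);
  }
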
Fixes #5761
Otherwise GL surfaces that redraw without changing the hotspot get it
applied on top of the previous offset every frame and quickly slide away.
The cairo path and the X11 backend do not have this bug.
The availability of wl_surface.offset depends on the compositor, so we
can't call it unconditionally. Add a version check so that we only call
offset if we know we won't raise a protocol error.
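A sketch of that guard against the plain wayland-client API (dx/dy
standing in for the hotspot delta):

  #include <wayland-client-protocol.h>

  /* Sketch: wl_surface.offset only exists since wl_surface version 5, so
   * check the bound version before using it and fall back to passing the
   * offset through wl_surface.attach on older compositors. */
  static void
  attach_with_offset (struct wl_surface *surface,
                      struct wl_buffer  *buffer,
                      int32_t            dx,
                      int32_t            dy)
  {
    if (wl_surface_get_version (surface) >= WL_SURFACE_OFFSET_SINCE_VERSION)
      {
        wl_surface_attach (surface, buffer, 0, 0);
        wl_surface_offset (surface, dx, dy);
      }
    else
      {
        wl_surface_attach (surface, buffer, dx, dy);
      }
  }
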
Fixes: 0eb791eaaa ("Make mask nodes more versatile")
Instead of using GL_BACK, use GL_BACK_LEFT, because the spec demands
this (even though many drivers don't enforce it).
Also move the call from the GDK backends into the GLContext code, as
this is a generic EGL issue (nvidia being the main driver in need of
this call, see 9c4c4eaaa1 for a longer
discussion).
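A sketch of the call in question (epoxy-based, as GDK uses libepoxy; the
wrapper function is made up):

  #include <epoxy/gl.h>

  /* On a desktop GL default framebuffer, glDrawBuffers() only accepts the
   * GL_*_LEFT / GL_*_RIGHT enums (plus GL_NONE); GL_BACK is only valid for
   * the older glDrawBuffer() call and for GLES contexts. */
  static void
  use_back_buffer_for_drawing (void)
  {
    static const GLenum buffers[] = { GL_BACK_LEFT };

    glDrawBuffers (1, buffers);
  }
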
Fixes #4402
The term "hdr" is so overloaded, we shouldn't use them anywhere, except
from maybe describing all of this work in blog posts and other marketing
materials.
So do renames:
* hdr => high_depth
* request_hdr => prefers_high_depth
This more accurately describes what is going on.
Unify the X11 and Wayland EGL contexts.
This is a bit ugly to implement, because I don't want to create an
interface and I can't make them inherit from the same class, because
one needs to inherit from X11GLContext and the other from
WaylandGLContext.
So we have to put the code in GdkGLContext and make sure non-EGL
contexts can't accidentally run it. This is rather easy because we can
just check for priv->egl_context != NULL.
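Roughly, as an illustrative stub (the surrounding names are made up, only
the NULL check mirrors the real code):

  #include <glib.h>

  /* Illustrative only: how shared code in GdkGLContext tells EGL and
   * non-EGL contexts apart. "egl_context" is the private field the commit
   * message refers to; here it is stubbed as a plain pointer. */
  typedef struct
  {
    gpointer egl_context;  /* EGLContext, or NULL for GLX/WGL/CGL contexts */
  } ExampleGLContextPrivate;

  static gboolean
  example_run_egl_only_code (ExampleGLContextPrivate *priv)
  {
    if (priv->egl_context == NULL)
      return FALSE;  /* not an EGL context, nothing to do */

    /* ... EGL-specific work would go here ... */
    return TRUE;
  }
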
Print the extensions one per line, and sort them
alphabetically, so it is actually possible to find
something in the list.
Also print a short description of the chosen config.
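The sorting part is simple enough; a self-contained sketch with GLib
(function names made up):

  #include <stdlib.h>
  #include <string.h>
  #include <glib.h>
  #include <EGL/egl.h>

  /* Sketch: split the space-separated extension string, sort it and print
   * one extension per line so the list is actually searchable. */
  static int
  compare_extensions (const void *a, const void *b)
  {
    return strcmp (*(const char *const *) a, *(const char *const *) b);
  }

  static void
  print_egl_extensions (EGLDisplay display)
  {
    const char *ext_string = eglQueryString (display, EGL_EXTENSIONS);
    char **extensions;
    guint n;

    if (ext_string == NULL)
      return;

    extensions = g_strsplit (ext_string, " ", -1);
    n = g_strv_length (extensions);
    qsort (extensions, n, sizeof (char *), compare_extensions);

    for (guint i = 0; i < n; i++)
      {
        if (extensions[i][0] != '\0')
          g_print ("    %s\n", extensions[i]);
      }

    g_strfreev (extensions);
  }
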
Creative people managed to create an X11 display and a Wayland display
at once, thereby getting EGL and GLX involved in a fight to the death
over the ownership of the glFoo() symbolspace.
One way to force such a fight with readily available tools is (on Wayland)
to run something like:
GTK_INSPECTOR_DISPLAY=:1 GTK_DEBUG=interactive gtk4-demo
Related: xdg-desktop-portal-gnome#5
Goals:
1. Provide as much information as possible in the error message, so
users can try to fix their system themselves.
2. Try to formulate the error message in a way that explains that this
is not something GTK can fix, but a lower layer problem.
Related: #4193
nvidia sets the default draw buffer to GL_NONE if EGL contexts are
initially bound to EGL_NO_SURFACE, which is exactly what we are doing. So
set the draw buffer to GL_BACK when drawing, as it should be.
See https://phabricator.services.mozilla.com/D118743 for a discussion
about EGL_NO_CONTEXT and draw buffers.
Now that we have the display's context to hook into, we can use it to
construct other GL contexts and don't need a GdkSurface vfunc anymore.
This has the added benefit that backends can have different GdkGLContext
classes on the display and get new GLContexts generated from them, so
we get multiple GL backend support per GDK backend for free.
I originally wanted to make this a vfunc on GdkGLContextClass, but
it turns out all the backends would just call g_object_new() anyway.
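A hedged sketch of what a backend ends up doing; the "surface" property
name is an assumption here, the point is that the class of the display's
base context selects the GL backend:

  #include <gdk/gdk.h>

  /* Illustrative: a new context is just another instance of whatever
   * GdkGLContext subclass the display's base context has, so the GL
   * backend (EGL, GLX, WGL, ...) is inherited automatically. */
  static GdkGLContext *
  create_sibling_context (GdkGLContext *base_context,
                          GdkSurface   *surface)
  {
    return g_object_new (G_OBJECT_TYPE (base_context),
                         "surface", surface,
                         NULL);
  }
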
Instead of
Display::make_gl_context_current()
we now have
GLContext::clear_current()
GLContext::make_current()
This fits better with the backends (we can actually implement
clearCurrent on macOS now) and makes it easier to implement different GL
backends per GDK backend (like EGL/GLX on X11).
We also pass a surfaceless boolean to make_current() so the calling code
can decide if a surface needs to be bound or not, because the backends
were all doing whatever, which was very counterproductive.
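As a rough illustration of the resulting shape (names and signatures here
are assumptions; the real vfuncs live in GDK's private headers):

  #include <gdk/gdk.h>

  /* Illustrative vtable shape for the two operations described above. */
  typedef struct
  {
    /* unbind whatever GL context is current on the calling thread */
    void     (* clear_current) (GdkGLContext *context);

    /* bind this context; with surfaceless == TRUE the backend may bind no
     * surface at all (e.g. EGL_NO_SURFACE), which is enough for work that
     * does not draw to a window, like uploading textures */
    gboolean (* make_current)  (GdkGLContext *context,
                                gboolean      surfaceless);
  } ExampleGLContextVTable;
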
The code to create and manage a fake EGL surface to bind to is
complex and completely untested, because everyone seems to support this
extension.
nvidia and Mesa do support it and, according to Mesa devs, adding support
in a new driver is rather simple and Mesa drivers gain that feature
automatically, so all future drivers should have it.
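In plain EGL terms, the decision looks roughly like this (dummy_surface
standing in for the fake surface mentioned above):

  #include <string.h>
  #include <epoxy/egl.h>

  /* Sketch: bind EGL_NO_SURFACE when the driver advertises
   * EGL_KHR_surfaceless_context, otherwise fall back to a dummy surface.
   * In practice both Mesa and nvidia advertise the extension. */
  static EGLBoolean
  make_current_surfaceless (EGLDisplay display,
                            EGLContext context,
                            EGLSurface dummy_surface)
  {
    const char *extensions = eglQueryString (display, EGL_EXTENSIONS);

    if (extensions != NULL &&
        strstr (extensions, "EGL_KHR_surfaceless_context") != NULL)
      return eglMakeCurrent (display, EGL_NO_SURFACE, EGL_NO_SURFACE, context);

    return eglMakeCurrent (display, dummy_surface, dummy_surface, context);
  }
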
... or more exactly: Only use paint contexts with
gdk_cairo_draw_from_gl().
Instead of paint contexts being the only contexts that call swapBuffer(),
any context can be used for this, as long as it's used with
begin_frame()/end_frame().
This removes 2 features:
1. We no longer need a big sharing hierarchy. All contexts are now
shared with gdk_display_get_gl_context().
2. There is no longer a difference between attached and non-attached
contexts. All contexts work the same way.
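In terms of public API, usage now boils down to something like this (a
sketch; the GL calls themselves are elided):

  #include <gdk/gdk.h>

  /* Usage sketch: any realized GdkGLContext can drive a frame through the
   * GdkDrawContext begin/end API; no special "attached" context needed. */
  static void
  render_one_frame (GdkGLContext *context,
                    int           width,
                    int           height)
  {
    cairo_region_t *region =
      cairo_region_create_rectangle (&(cairo_rectangle_int_t) { 0, 0, width, height });

    gdk_draw_context_begin_frame (GDK_DRAW_CONTEXT (context), region);
    gdk_gl_context_make_current (context);

    /* ... GL drawing goes here ... */

    gdk_draw_context_end_frame (GDK_DRAW_CONTEXT (context));  /* swaps buffers */

    cairo_region_destroy (region);
  }
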
The vfunc is called to initialize GL and it returns a "base" context
that GDK then uses as the context all others are shared with. So the GL
context share tree now looks like:
+ context from init_gl
  - context1
  - context2
  ...
So this is a flat tree now; the complexity is gone.
The only caveat is that backends now need to create a GL context when
initializing GL, so some refactoring was needed.
Two new functions have been added:
* gdk_display_prepare_gl()
This is public API and can be used to ensure that GL has been
initialized or, if not, to retrieve an error to display (or debug-print);
see the usage sketch below.
* gdk_display_get_gl_context()
This is a private function to retrieve the base context from
init_gl(). It replaces gdk_surface_get_shared_data_context().
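The usage sketch for the public half (error handling shown with a plain
g_warning()):

  #include <gdk/gdk.h>

  /* gdk_display_prepare_gl() reports whether GL can be used on this
   * display, and if not, why. */
  static gboolean
  check_gl (GdkDisplay *display)
  {
    GError *error = NULL;

    if (!gdk_display_prepare_gl (display, &error))
      {
        g_warning ("GL is not available: %s", error->message);
        g_clear_error (&error);
        return FALSE;
      }

    return TRUE;
  }
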
Create it during init and then reuse it for all contexts.
While doing that, also improve error reporting - that's not used yet but
will be in later commits.
This lets the NGL backend be selected instead of the Cairo backend on
devices which expose both GL and GLES, but have better support for GLES.
Tested on a PinePhone.
We must wl_surface.commit after xdg_surface.ack_configure to make it
take effect. We failed to do so when a configure event didn't result
in new updates, so make sure we fall back to a simple
wl_surface.commit if no new frame was actually painted.
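A sketch of the resulting logic (the xdg-shell client header is generated
by wayland-scanner; frame_painted is a stand-in for "a new frame was drawn
and committed this cycle"):

  #include <stdbool.h>
  #include <stdint.h>
  #include <wayland-client-protocol.h>
  #include "xdg-shell-client-protocol.h"

  /* The state acked via xdg_surface.ack_configure only takes effect on the
   * next wl_surface.commit, so commit explicitly if nothing else will. */
  static void
  ack_configure (struct xdg_surface *xdg_surface,
                 struct wl_surface  *wl_surface,
                 uint32_t            serial,
                 bool                frame_painted)
  {
    xdg_surface_ack_configure (xdg_surface, serial);

    if (!frame_painted)
      wl_surface_commit (wl_surface);
  }
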
Closes: #2910
usec is the scale of the monotonic timer, which is where we get almost
all the times from. The only actual source of nsec is the OpenGL
GPU time (but who knows what the actual resolution of that is).
Changing this to usec allows us to get rid of " * 1000" in a *lot* of
places all over the codebase, which are ugly and confusing.
Add marks for when we do commits, swap buffers or
receive frame events. These are the low-level start
and end points of the frame cycle, and it is useful
to see them in the profiler.
We don't need the complicated wrapper system anymore,
since client-side windows are gone. This commit moves
all the vfuncs to GdkSurfaceClass, and changes the
backends to just derive their surface implementation
from GdkSurface.
As per the spec [1]:
> The back buffer can
> either be reported as invalid (has an age of 0) or it may be
> reported to contain the contents from n frames prior to the
> current frame.
So a buffer age of 1 means that the buffer was used in the last frame.
We were handling buffer_age==1 the same as buffer_age==0, i.e. we
returned the full damage for the surface, even though age 1 needs no
extra damage beyond the current frame's.
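A sketch of the damage calculation this implies (history[] and its length
are assumptions: the damage regions kept from previous frames):

  #include <epoxy/egl.h>
  #include <cairo.h>

  /* Sketch of EGL_EXT_buffer_age handling: age 0 means undefined contents,
   * age N means the buffer holds the frame from N frames ago, so age 1 is
   * simply "last frame" and needs no extra damage. */
  static void
  add_buffer_age_damage (EGLDisplay                   display,
                         EGLSurface                   surface,
                         cairo_region_t              *damage,
                         cairo_region_t              *history[],
                         int                          history_len,
                         const cairo_rectangle_int_t *whole_surface)
  {
    EGLint age = 0;

    eglQuerySurface (display, surface, EGL_BUFFER_AGE_EXT, &age);

    if (age == 0 || age - 1 > history_len)
      {
        /* unknown or too old: repaint everything */
        cairo_region_union_rectangle (damage, whole_surface);
        return;
      }

    /* add the damage of the last (age - 1) frames; for age == 1 this adds
     * nothing, because the buffer already contains the previous frame */
    for (int i = 0; i < age - 1; i++)
      cairo_region_union (damage, history[i]);
  }
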
[1] https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_buffer_age.txt
We used to pass 2 regions to GdkDrawContext.end_frame(), but the code
kept confusing what they meant. So we don't do that anymore and only pass
the region that matters: the frame region.
This makes the previous gdk_draw_context_is_drawing() function public
under a new name.
I decided against the old name because we use the term "frame" for a
drawing operation, so I wanted to have this boolean flag reuse the term.