It appears that NVIDIA does not implement EGL_EXT_swap_buffers_with_damage
in their EGL implementation, but does implement the KHR variant of it.
Check for a suitable implementation and store a pointer to the
compatible function within the GdkGLContextPrivate struct.
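A rough sketch of what that check might look like (the helper name and
where the pointer ends up are illustrative, not the actual code):

    #include <string.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    /* Both entry points share the KHR typedef's signature */
    static PFNEGLSWAPBUFFERSWITHDAMAGEKHRPROC
    find_swap_buffers_with_damage (EGLDisplay display)
    {
      const char *extensions = eglQueryString (display, EGL_EXTENSIONS);

      if (extensions == NULL)
        return NULL;

      /* Prefer the EXT variant, fall back to the KHR one (NVIDIA) */
      if (strstr (extensions, "EGL_EXT_swap_buffers_with_damage"))
        return (PFNEGLSWAPBUFFERSWITHDAMAGEKHRPROC)
               eglGetProcAddress ("eglSwapBuffersWithDamageEXT");

      if (strstr (extensions, "EGL_KHR_swap_buffers_with_damage"))
        return (PFNEGLSWAPBUFFERSWITHDAMAGEKHRPROC)
               eglGetProcAddress ("eglSwapBuffersWithDamageKHR");

      return NULL;
    }

The result would then be stored in GdkGLContextPrivate and used instead
of calling eglSwapBuffersWithDamageEXT() directly.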
There are situations where our "default framebuffer" is not actually
zero, yet we still want to apply a scissor rect.
Generally, 0 is the default framebuffer. But on platforms where we need
to bind a platform-specific feature to a GL_FRAMEBUFFER, we might have a
default that is not 0. For example, on macOS we bind an IOSurfaceRef to
a GL_TEXTURE_RECTANGLE which then is assigned as the backing store for a
framebuffer. This differs from using gsk_gl_renderer_render_texture()
in that we don't want to incur an extra copy to the destination surface,
nor do we even have a way to pass a texture_id into render_texture().
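A minimal sketch of the resulting pattern, assuming a hypothetical
default_framebuffer value that holds whatever id acts as the default
(0 in the common case, the IOSurface-backed framebuffer on macOS):

    #include <epoxy/gl.h>

    static void
    bind_default_framebuffer_with_scissor (GLuint default_framebuffer,
                                           int    x,
                                           int    y,
                                           int    width,
                                           int    height)
    {
      /* Not necessarily 0, see above */
      glBindFramebuffer (GL_FRAMEBUFFER, default_framebuffer);
      glEnable (GL_SCISSOR_TEST);
      glScissor (x, y, width, height);
    }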
The EGL context that we are actually creating must match the OpenGL/ES
version and allowed GL API set of the previously-created EGL context
that it will be shared with, so that the two can interoperate, if
applicable.
Fix this by making sure that we request the OpenGL/ES version and
OpenGL API set that match what we have in our shared EGL context.
Otherwise, the newly-created EGL context assumed an OpenGL/ES 2.0
context that supported desktop OpenGL, which may not be what we want,
such as in the case of libANGLE.
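A sketch of the idea, assuming major/minor were queried from the shared
context beforehand (the version attributes need EGL 1.5 or
EGL_KHR_create_context):

    #include <glib.h>
    #include <EGL/egl.h>

    static EGLContext
    create_matching_context (EGLDisplay display,
                             EGLConfig  config,
                             EGLContext shared,
                             gboolean   use_es,
                             int        major,
                             int        minor)
    {
      EGLint attribs[] = {
        EGL_CONTEXT_MAJOR_VERSION, major,
        EGL_CONTEXT_MINOR_VERSION, minor,
        EGL_NONE
      };

      /* Request the same API as the context we will share with */
      eglBindAPI (use_es ? EGL_OPENGL_ES_API : EGL_OPENGL_API);

      return eglCreateContext (display, config, shared, attribs);
    }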
Instead of just passing major/minor, pass them twice, once for GL and
once for GLES. This way, we don't need to check for GL and GLES
separately.
If something is supported unconditionally, passing 0/0 works fine.
That said, I'd like to group the arguments somehow, because otherwise
it's just a confusing list of numbers - but I have no idea how to do
that.
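A sketch of what such a doubled-up check might look like; the exact
signature is illustrative:

    #include <gdk/gdk.h>

    static gboolean
    check_version (GdkGLContext *context,
                   int           required_gl_major,
                   int           required_gl_minor,
                   int           required_gles_major,
                   int           required_gles_minor)
    {
      int major, minor;

      gdk_gl_context_get_version (context, &major, &minor);

      if (gdk_gl_context_get_use_es (context))
        return major > required_gles_major ||
               (major == required_gles_major && minor >= required_gles_minor);
      else
        return major > required_gl_major ||
               (major == required_gl_major && minor >= required_gl_minor);
    }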
We want critical GL debug messages to be logged as criticals, so that
the testsuite aborts when they appear.
This is relevant in particular for GLES warnings in the GLES runner,
because the conditions they warn about can cause crashes on GL drivers
less forgiving than Mesa.
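A sketch of the kind of callback this implies, mapping high-severity GL
debug messages to g_critical() so that a test run treating criticals as
fatal aborts:

    #include <epoxy/gl.h>
    #include <glib.h>

    static void
    gl_debug_message_callback (GLenum        source,
                               GLenum        type,
                               GLuint        id,
                               GLenum        severity,
                               GLsizei       length,
                               const GLchar *message,
                               const void   *user_data)
    {
      if (severity == GL_DEBUG_SEVERITY_HIGH)
        g_critical ("GL: %s", message);
      else
        g_debug ("GL: %s", message);
    }

registered via glDebugMessageCallback (gl_debug_message_callback, NULL).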
Related: #4571
When destroying the EGLSurface or GLXDrawable of a GdkSurface, make sure
the current context is not still bound to it.
If it is, clear the current context.
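The EGL side of that check might look roughly like this:

    #include <EGL/egl.h>

    static void
    destroy_egl_surface (EGLDisplay display,
                         EGLSurface surface)
    {
      /* Unbind the surface first if it is still current */
      if (eglGetCurrentSurface (EGL_DRAW) == surface ||
          eglGetCurrentSurface (EGL_READ) == surface)
        eglMakeCurrent (display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                        EGL_NO_CONTEXT);

      eglDestroySurface (display, surface);
    }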
Fixes #4554
Instead of using GL_BACK, use GL_BACK_LEFT, because the spec demands
this (even though many drivers don't enforce it).
Also move the call from the GDK backends into the GLContext code, as
this is a generic EGL issue (nvidia being the main driver in need of
this call, see 9c4c4eaaa1 for a longer
discussion).
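A sketch of the call in question:

    #include <epoxy/gl.h>

    static void
    select_back_buffer (void)
    {
      /* The spec wants GL_BACK_LEFT here, not GL_BACK */
      static const GLenum buffers[] = { GL_BACK_LEFT };

      glDrawBuffers (1, buffers);
    }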
Fixes #4402
This is an alternative to gdk_surface_create_gl_context() when the
context is meant to only draw to textures.
This is useful in the testsuite or in GStreamer or with GLArea,
basically whenever we want to do GL stuff but don't need to actually
draw anything on screen.
A bunch of code will need to be updated to deal with context->surface
being NULL.
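Usage would look something like this, assuming the new constructor is
the display-level gdk_display_create_gl_context():

    #include <gdk/gdk.h>

    static GdkGLContext *
    create_texture_context (GdkDisplay *display)
    {
      GError *error = NULL;
      GdkGLContext *context;

      /* No GdkSurface involved: context->surface stays NULL */
      context = gdk_display_create_gl_context (display, &error);
      if (context == NULL)
        {
          g_warning ("Failed to create GL context: %s", error->message);
          g_clear_error (&error);
        }

      return context;
    }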
It does not belong in GdkGLContext; it's a renderer thing.
It's also the only user of that API.
Introduce gdk_gl_context_check_version() private API to make version
checks simpler.
Add gdk_gl_context_is_api_allowed() for backends and make them use it.
Finally, have them return the final API as the return value (or 0 on
error).
And then use that API instead of a use_es boolean flag.
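A sketch of the resulting backend pattern; try_create_gl() and
try_create_gles() are hypothetical stand-ins for the backend-specific
context creation:

    #include <gdk/gdk.h>

    static gboolean try_create_gl   (GdkGLContext *context); /* hypothetical */
    static gboolean try_create_gles (GdkGLContext *context); /* hypothetical */

    static GdkGLAPI
    realize_context (GdkGLContext *context,
                     GdkGLAPI      allowed,
                     GError      **error)
    {
      if ((allowed & GDK_GL_API_GL) && try_create_gl (context))
        return GDK_GL_API_GL;

      if ((allowed & GDK_GL_API_GLES) && try_create_gles (context))
        return GDK_GL_API_GLES;

      g_set_error (error, GDK_GL_ERROR, GDK_GL_ERROR_NOT_AVAILABLE,
                   "No allowed GL API could be created");
      return 0;
    }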
Fixes #4221
The term "hdr" is so overloaded, we shouldn't use them anywhere, except
from maybe describing all of this work in blog posts and other marketing
materials.
So do renames:
* hdr => high_depth
* request_hdr => prefers_high_depth
This more accurately describes what is going on.
Also, now make gdk_memory_convert() the only conversion function
and allow conversions between any 2 formats by going via a float[4].
This could be optimized via fast-paths, but so far it isn't.
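The pivot looks roughly like this; to_float(), from_float() and
bytes_per_pixel() are hypothetical stand-ins for the per-format
helpers:

    #include <gdk/gdk.h>

    static void
    convert_pixels (guchar          *dst,
                    GdkMemoryFormat  dst_format,
                    const guchar    *src,
                    GdkMemoryFormat  src_format,
                    gsize            n_pixels)
    {
      for (gsize i = 0; i < n_pixels; i++)
        {
          float rgba[4];

          to_float (src_format, src, rgba);    /* unpack one pixel */
          from_float (dst_format, rgba, dst);  /* repack it */

          src += bytes_per_pixel (src_format);
          dst += bytes_per_pixel (dst_format);
        }
    }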
If EGL supports:
* no-config contexts
* pixel formats with more than 8 bits per channel
* (optionally) floating point pixel formats
Then select such a profile as the HDR format and use it when HDR is
requested.
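A sketch of the config selection, using EGL_EXT_pixel_format_float for
the floating point case:

    #include <glib.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    static EGLConfig
    choose_high_depth_config (EGLDisplay display,
                              gboolean   want_float)
    {
      EGLint attrs[16];
      EGLConfig config;
      EGLint n_configs;
      int i = 0;

      attrs[i++] = EGL_RED_SIZE;   attrs[i++] = want_float ? 16 : 10;
      attrs[i++] = EGL_GREEN_SIZE; attrs[i++] = want_float ? 16 : 10;
      attrs[i++] = EGL_BLUE_SIZE;  attrs[i++] = want_float ? 16 : 10;
      if (want_float)
        {
          attrs[i++] = EGL_COLOR_COMPONENT_TYPE_EXT;
          attrs[i++] = EGL_COLOR_COMPONENT_TYPE_FLOAT_EXT;
        }
      attrs[i++] = EGL_NONE;

      if (!eglChooseConfig (display, attrs, &config, 1, &n_configs) ||
          n_configs < 1)
        return NULL;

      return config;
    }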
Unify the X11 and Wayland EGL contexts.
This is a bit ugly to implement, because I don't want to create an
interface and I can't make them inherit from the same object, because
one needs to inherit from X11GLContext and the other from
WaylandGLContext.
So we have to put the code in GdkGLContext and make sure non-EGL
contexts can't accidentally run it. This is rather easy because we can
just check for priv->egl_context != NULL.
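The guard is then just (names are illustrative):

    static gboolean
    gdk_gl_context_is_egl (GdkGLContext *self)
    {
      GdkGLContextPrivate *priv = gdk_gl_context_get_instance_private (self);

      return priv->egl_context != NULL;
    }

so the shared code can bail out early on non-EGL contexts.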
Creative people managed to create an X11 display and a Wayland display
at once, thereby getting EGL and GLX involved in a fight to the death
over the ownership of the glFoo() symbolspace.
A way to force such a fight with available tools here is (on Wayland)
running something like:
GTK_INSPECTOR_DISPLAY=:1 GTK_DEBUG=interactive gtk4-demo
Related: xdg-desktop-portal-gnome#5
Now that we have the display's context to hook into, we can use it to
construct other GL contexts and don't need a GdkSurface vfunc anymore.
This has the added benefit that backends can have different GdkGLContext
classes on the display and get new GLContexts generated from them, so
we get multiple GL backend support per GDK backend for free.
I originally wanted to make this a vfunc on GdkGLContextClass, but
it turns out all the backends would just call g_object_new() anyway.
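The construction then boils down to something like this sketch,
assuming the construct-only surface property of GdkDrawContext:

    #include <gdk/gdk.h>

    static GdkGLContext *
    create_context_like (GdkGLContext *display_context,
                         GdkSurface   *surface)
    {
      /* Instantiate whatever GdkGLContext subclass the display uses */
      return g_object_new (G_OBJECT_TYPE (display_context),
                           "surface", surface,
                           NULL);
    }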
Instead of
Display::make_gl_context_current()
we now have
GLContext::clear_current()
GLContext::make_current()
This fits better with the backends (we can actually implement
clearCurrent on macOS now) and makes it easier to implement different GL
backends for backends (like EGL/GLX on X11).
We also pass a surfaceless boolean to make_current() so the calling code
can decide if a surface needs to be bound or not, because the backends
were all doing whatever, which was very counterproductive.
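In class struct form, the change amounts to roughly this (the exact
prototypes are illustrative):

    struct _GdkGLContextClass
    {
      GdkDrawContextClass parent_class;

      void     (* clear_current) (GdkGLContext *context);
      gboolean (* make_current)  (GdkGLContext *context,
                                  gboolean      surfaceless);
    };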
... or more exactly: Only use paint contexts with
gdk_cairo_draw_from_gl().
Instead of paint contexts being the only contexts that call
swapBuffer(), any context can be used for this when it's used with
begin_frame()/end_frame().
This removes 2 features:
1. We no longer need a big sharing hierarchy. All contexts are now
shared with gdk_display_get_gl_context().
2. There is no longer a difference between attached and non-attached
contexts. All contexts work the same way.
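Presenting from any context then follows the usual draw context
pattern, roughly:

    #include <gdk/gdk.h>

    static void
    draw_frame (GdkGLContext   *context,
                cairo_region_t *damage)
    {
      gdk_draw_context_begin_frame (GDK_DRAW_CONTEXT (context), damage);

      gdk_gl_context_make_current (context);
      /* ... issue GL calls ... */

      gdk_draw_context_end_frame (GDK_DRAW_CONTEXT (context));
    }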
Do not treat the context as already current when the value
of context::in-frame changes.
This is so we can bind to EGL_NO_SURFACE if context::in-frame == false
and to context::surface if context::in-frame == true.
This allows getting rid of the attached property in future commits.
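On the EGL side, making current then picks the surface based on the
in-frame state (sketch):

    #include <glib.h>
    #include <EGL/egl.h>

    static void
    egl_make_current (EGLDisplay display,
                      EGLContext context,
                      EGLSurface surface,
                      gboolean   in_frame)
    {
      /* Outside a frame, go surfaceless */
      EGLSurface bound = in_frame ? surface : EGL_NO_SURFACE;

      eglMakeCurrent (display, bound, bound, context);
    }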