Now that all contexts do that, insist that they keep doing it.
And because they keep doing it, we can support querying the GL version
from gdk_gl_context_get_version() without requiring the context to be
made current.
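A minimal usage sketch of what this enables, assuming the context has already been realized: the cached version can be read without making the context current first.

  #include <gdk/gdk.h>

  static void
  print_gl_version (GdkGLContext *context)
  {
    int major, minor;

    /* No gdk_gl_context_make_current() is needed before this,
     * as long as the context has been realized. */
    gdk_gl_context_get_version (context, &major, &minor);
    g_print ("GL version: %d.%d\n", major, minor);
  }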
The EGL spec states:
The context returned must be the specified version, or a later
version which is backwards compatible with that version.
Even if a later version is returned, the specified version
must correspond to a defined version of the client API.
GTK has so far been relying on EGL implementations returning a
later version, because that is what Mesa does.
But ANGLE does not do that and only provides the minimum version, which
means Windows EGL has been forced to use a lower EGL version for no
reason.
So fix this and try versions in order from highest to lowest.
According to B. Otte, GLES 2.0 is fine with current GTK, so let's use the
same minimum requirement for all implementations.
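A rough sketch of the approach (not the actual GDK code; the helper name and the version list are illustrative): request versions in descending order, so implementations like ANGLE that return exactly what was asked for still end up with the highest version available.

  #include <EGL/egl.h>
  #include <glib.h>

  static const struct { int major, minor; } gles_versions[] = {
    { 3, 2 }, { 3, 1 }, { 3, 0 }, { 2, 0 },  /* 2.0 is the shared minimum */
  };

  static EGLContext
  create_best_context (EGLDisplay display, EGLConfig config, EGLContext share)
  {
    for (gsize i = 0; i < G_N_ELEMENTS (gles_versions); i++)
      {
        /* EGL_CONTEXT_MAJOR/MINOR_VERSION need EGL 1.5 or EGL_KHR_create_context */
        EGLint attribs[] = {
          EGL_CONTEXT_MAJOR_VERSION, gles_versions[i].major,
          EGL_CONTEXT_MINOR_VERSION, gles_versions[i].minor,
          EGL_NONE
        };
        EGLContext context = eglCreateContext (display, config, share, attribs);

        if (context != EGL_NO_CONTEXT)
          return context;
      }

    return EGL_NO_CONTEXT;
  }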
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Fractional scaling with the GL renderer is
experimental for now, so we disable it unless
GDK_DEBUG=gl-fractional is set.
This will give us time to work out the kinks.
This commit combines changes in the Wayland backend,
the GL context frontend, and the GL renderer to switch
them all to use the fractional scale.
In the Wayland backend, we now use the fractional scale
to size the EGL window.
In the GL frontend code, we use the fractional scale to
scale the damage region and surface in begin/end_frame.
And in the GL renderer, we replace gdk_surface_get_scale_factor()
with gdk_surface_get_scale().
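A minimal sketch of the kind of change this involves (the helper is hypothetical; gdk_surface_get_scale() returns the fractional scale as a double): size device-pixel buffers with the fractional scale instead of the integer scale factor.

  #include <math.h>
  #include <gdk/gdk.h>

  static void
  get_buffer_size (GdkSurface *surface, int *width, int *height)
  {
    double scale = gdk_surface_get_scale (surface);  /* e.g. 1.5 on a 150% display */

    *width = (int) ceil (gdk_surface_get_width (surface) * scale);
    *height = (int) ceil (gdk_surface_get_height (surface) * scale);
  }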
... and use this check in gdk_gl_context_make_current() and
gdk_gl_context_get_current() to make sure the context really is still
current.
The context can stop being current when external GL implementations make
their own contexts current in the same threads that GDK contexts are used
in. This can happen, for example, with WebKit.
Theoretically, this should also allow external EGL code to run in X11
applications when GDK chooses to use GLX, but I didn't try it.
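A simplified sketch of the EGL side of the check (the helper name is illustrative, not the actual GDK code): in addition to GDK's own thread-local bookkeeping, ask the GL implementation whether our context is really still bound. make_current() can then skip work only when both agree.

  #include <EGL/egl.h>
  #include <glib.h>

  /* hypothetical helper: TRUE only if the GL implementation agrees that
   * our context is still the one bound in this thread */
  static gboolean
  context_is_really_current (EGLContext our_context)
  {
    return eglGetCurrentContext () == our_context;
  }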
Fixes #5392
Simplify the API to just return the requirements that the user
has asked for. The rest of the behavior was undocumented and was
previously used by internal code as a buggy source of default values.
Since that code is now fixed, remove the unnecessary cruft.
There are two reasons for this:
* First, the refactored realize code now makes sure that no
context with unsupported version is ever created.
* Second, this code could run into false positives and negatives, since
  the user is neither required nor expected to call set_required_version
  in any specific order relative to set_allowed_apis. Therefore,
  some version could be rejected or accepted based on a set of
  allowed APIs that the user has not yet finished configuring.
It is useful for backends to get the user-set preferences while
ensuring the correctness of the result, which will always be
greater than or equal to the minimum version provided.
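A hypothetical sketch of the guarantee described for backends (the helper and the minimum values are illustrative, not the actual implementation): the value handed out is whatever the user requested, clamped so it is never below GDK's own minimum.

  #include <gdk/gdk.h>

  /* illustrative minimum; the real value depends on the GL API in use */
  #define MIN_MAJOR 2
  #define MIN_MINOR 0

  static void
  get_clamped_required_version (GdkGLContext *context, int *major, int *minor)
  {
    gdk_gl_context_get_required_version (context, major, minor);

    if (*major < MIN_MAJOR || (*major == MIN_MAJOR && *minor < MIN_MINOR))
      {
        *major = MIN_MAJOR;
        *minor = MIN_MINOR;
      }
  }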
Those property features don't seem to be in use anywhere.
They are redundant since the docs cover the same information
and more. They also created unnecessary translation work.
Closes #4904
It appears that NVIDIA's EGL implementation does not provide
EGL_EXT_swap_buffers_with_damage, but does provide the KHR variant of it.
This checks for a suitable implementation and stores a pointer to the
compatible implementation within the GdkGLContextPrivate struct.
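A sketch of the kind of lookup described (the typedef and helper name are illustrative): prefer the EXT entry point, and fall back to the KHR variant that NVIDIA provides; both share the same signature.

  #include <string.h>
  #include <EGL/egl.h>
  #include <EGL/eglext.h>

  typedef EGLBoolean (EGLAPIENTRYP SwapBuffersWithDamageFunc) (EGLDisplay    display,
                                                               EGLSurface    surface,
                                                               const EGLint *rects,
                                                               EGLint        n_rects);

  static SwapBuffersWithDamageFunc
  find_swap_buffers_with_damage (EGLDisplay display)
  {
    const char *extensions = eglQueryString (display, EGL_EXTENSIONS);

    if (extensions == NULL)
      return NULL;

    if (strstr (extensions, "EGL_EXT_swap_buffers_with_damage"))
      return (SwapBuffersWithDamageFunc) eglGetProcAddress ("eglSwapBuffersWithDamageEXT");
    if (strstr (extensions, "EGL_KHR_swap_buffers_with_damage"))
      return (SwapBuffersWithDamageFunc) eglGetProcAddress ("eglSwapBuffersWithDamageKHR");

    return NULL;
  }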
There are situations where our "default framebuffer" is not actually
zero, yet we still want to apply a scissor rect.
Generally, 0 is the default framebuffer. But on platforms where we need
to bind a platform-specific feature to a GL_FRAMEBUFFER, we might have a
default that is not 0. For example, on macOS we bind an IOSurfaceRef to
a GL_TEXTURE_RECTANGLE which then is assigned as the backing store for a
framebuffer. This is different from using gsk_gl_renderer_render_texture()
in that we don't want to incur an extra copy to the destination surface,
nor do we even have a way to pass a texture_id into render_texture().
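A simplified sketch of the macOS-style setup being described (error handling and the IOSurface binding itself are omitted, and the helper name is illustrative): the renderer's "default framebuffer" is a real FBO backed by a GL_TEXTURE_RECTANGLE, so the scissor rect still has to be applied even though the framebuffer id is not 0.

  #include <epoxy/gl.h>

  static GLuint
  create_iosurface_backed_framebuffer (GLuint *out_texture)
  {
    GLuint framebuffer, texture;

    glGenTextures (1, &texture);
    glBindTexture (GL_TEXTURE_RECTANGLE, texture);
    /* ... the IOSurfaceRef would be attached to the texture here ... */

    glGenFramebuffers (1, &framebuffer);
    glBindFramebuffer (GL_FRAMEBUFFER, framebuffer);
    glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                            GL_TEXTURE_RECTANGLE, texture, 0);

    *out_texture = texture;
    return framebuffer;  /* non-zero, yet acts as the default framebuffer */
  }

  /* The scissor rect is applied regardless of whether the id is 0:
   *   glEnable (GL_SCISSOR_TEST);
   *   glScissor (x, y, width, height);
   */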
The EGL context that we are actually creating must have an OpenGL/ES
version and allowed GL API set that match the previously-created EGL
context it will be shared with, so that the two can interoperate, if
applicable.
Fix this by making sure that we request the OpenGL/ES version and
OpenGL API set that match what we have in our shared EGL context.
Previously, the newly-created EGL context assumed an OpenGL/ES 2.0
context that supported desktop OpenGL, which may not be what we wanted,
such as in the case of libANGLE.
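A sketch of the constraint (illustrative, not the actual GDK code): query the shared context for its client API and version, and request the same when creating the new one.

  #include <EGL/egl.h>

  static EGLContext
  create_matching_context (EGLDisplay display, EGLConfig config, EGLContext share)
  {
    EGLint api = EGL_OPENGL_ES_API;
    EGLint version = 2;

    eglQueryContext (display, share, EGL_CONTEXT_CLIENT_TYPE, &api);
    eglQueryContext (display, share, EGL_CONTEXT_CLIENT_VERSION, &version);

    eglBindAPI (api);

    EGLint attribs[] = {
      EGL_CONTEXT_CLIENT_VERSION, version,
      EGL_NONE
    };

    return eglCreateContext (display, config, share, attribs);
  }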
Instead of just passing major/minor, pass them twice, once for GL and
once for GLES. This way, we don't need to check for GL and GLES
separately.
If something is supported unconditionally, passing 0/0 works fine.
That said, I'd like to group the arguments somehow, because otherwise
it's just a confusing list of numbers - but I have no idea how to do
that.
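One possible way to group them, sketched here for illustration only (not necessarily the shape the code ends up with): a small version pair per API, with 0/0 keeping its "supported unconditionally" meaning.

  #include <glib.h>

  typedef struct {
    int major;
    int minor;
  } Version;

  /* hypothetical helper: pick the requirement matching the API in use
   * and compare it against the context's version; 0/0 always passes */
  static gboolean
  version_satisfied (Version  have,
                     Version  gl_required,
                     Version  gles_required,
                     gboolean use_es)
  {
    Version need = use_es ? gles_required : gl_required;

    return have.major > need.major ||
           (have.major == need.major && have.minor >= need.minor);
  }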
We want critical GL debug messages to be critical, so that the testsuite
sudokus itself when they appear.
This is relevant in particular for GLES warnings in the GLES runner,
because those warnings can cause crashes on GL drivers less forgiving
than Mesa.
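A sketch of the mapping being described (the callback name is illustrative, and it assumes the testsuite has already made criticals fatal, e.g. via g_test_init()): route high-severity GL debug messages through g_critical().

  #include <epoxy/gl.h>
  #include <glib.h>

  static void APIENTRY
  gl_debug_message_cb (GLenum        source,
                       GLenum        type,
                       GLuint        id,
                       GLenum        severity,
                       GLsizei       length,
                       const GLchar *message,
                       const void   *user_data)
  {
    if (severity == GL_DEBUG_SEVERITY_HIGH)
      g_critical ("GL: %s", message);  /* fatal under the testsuite */
    else
      g_debug ("GL: %s", message);
  }

  /* installed once per context with:
   *   glDebugMessageCallback (gl_debug_message_cb, NULL);
   */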
Related: #4571
When destroying the EGLSurface or GLXDrawable of a GdkSurface, make sure
the current context is not still bound to it.
If it is, clear the current context.
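A sketch of the EGL half of this (the GLX case is analogous; the helper name is illustrative): if the surface being destroyed is still bound to the current context, drop the binding first.

  #include <EGL/egl.h>

  static void
  destroy_egl_surface (EGLDisplay display, EGLSurface surface)
  {
    if (eglGetCurrentSurface (EGL_DRAW) == surface ||
        eglGetCurrentSurface (EGL_READ) == surface)
      eglMakeCurrent (display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);

    eglDestroySurface (display, surface);
  }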
Fixes #4554
Instead of using GL_BACK, use GL_BACK_LEFT, because the spec demands
this (even though many drivers don't enforce it).
Also move the call from the GDK backends into the GLContext code, as
this is a generic EGL issue (nvidia being the main driver in need of
this call, see 9c4c4eaaa1 for a longer
discussion).
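The commit does not name the call, but presumably it is a glDrawBuffers() call on the default framebuffer; a sketch of the spec-conformant form:

  #include <epoxy/gl.h>

  static void
  select_back_buffer (void)
  {
    /* For the default framebuffer, the spec wants GL_BACK_LEFT here;
     * GL_BACK was only accepted by glDrawBuffers() in later GL versions,
     * even though many drivers tolerate it anyway. */
    static const GLenum buffers[] = { GL_BACK_LEFT };

    glDrawBuffers (1, buffers);
  }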
Fixes #4402
This is an alternative to gdk_surface_create_gl_context() when the
context is meant to only draw to textures.
This is useful in the testsuite or in GStreamer or with GLArea,
basically whenever we want to do GL stuff but don't need to actually
draw anything on screen.
A bunch of code will need to be updated to deal with context->surface
being NULL.
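A hedged usage sketch, assuming the new entry point is a display-level constructor along the lines of gdk_display_create_gl_context() (error handling abbreviated): create and realize a context without any surface, then render only into textures and FBOs.

  #include <gdk/gdk.h>

  static GdkGLContext *
  create_offscreen_context (GdkDisplay *display)
  {
    GError *error = NULL;
    GdkGLContext *context = gdk_display_create_gl_context (display, &error);

    if (context == NULL || !gdk_gl_context_realize (context, &error))
      {
        g_warning ("Failed to create GL context: %s", error->message);
        g_clear_error (&error);
        g_clear_object (&context);
        return NULL;
      }

    gdk_gl_context_make_current (context);
    /* context->surface is NULL here; only texture/FBO rendering is possible */
    return context;
  }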