Use the debug envvar 'GDK_DEBUG=gl-egl' to determine whether we want to
initialize EGL first before trying WGL, as a means for people to more easily
enable EGL support on Windows for testing (for instance, to debug the
shaders).
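A sketch of the intended gating; the debug-check macro and the init helpers
here are placeholders, not the real GDK-Win32 names:

    if (GDK_DISPLAY_DEBUG_CHECK (display, GL_EGL))
      {
        /* GDK_DEBUG=gl-egl was set: give EGL a chance first */
        if (init_egl (display))          /* hypothetical helper */
          return TRUE;
      }

    return init_wgl (display);           /* hypothetical helper; WGL stays the default */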
This will also clean up the EGL code in GDK-Win32 and fix crashes caused by
using an invalid EGL context in gdk_gl_context_make_current(), since we did
not store the EGL context in the correct place (this was lost during the
transition to the common EGL initialization code).
On the Windows/libANGLE side, the initialization of EGL has now fully moved
to the common code in GDK, but we will still default to WGL for now. Help
with fixing the shaders on libANGLE is really appreciated!
We need to ensure that gdk_display_get_egl_display() is available even if EGL
is not enabled in the build, so that things will continue to link and work.
For builds without EGL, just return NULL.
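Roughly (assuming a HAVE_EGL build guard; the guard name is an assumption):

    #ifndef HAVE_EGL
    gpointer
    gdk_display_get_egl_display (GdkDisplay *self)
    {
      /* no EGL in this build, but keep the symbol so callers still link */
      return NULL;
    }
    #endif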
This ports the EGL code in GDK-Win32 to use the common GDK code to
initialize EGL. In its current state, however, the code is disabled: although
EGL is correctly initialized, gdk_gl_context_make_current() fails because the
shaders do not work for EGL via libANGLE on Windows.
We can now clean things up in gdkglcontext-win32-egl.c as a result.
Ping/pong serials are not meant to be interpreted as user input serials
(e.g. those given back later to the compositor on grabs). As a matter
of fact, Mutter uses a different count (i.e. timestamps) in these, so
using these serials may confuse the compositor into denying certain
operations like DnD.
Instead of using GL_BACK, use GL_BACK_LEFT, because the spec demands this
(even though many drivers don't enforce it).
Also move the call from the GDK backends into the GLContext code, as
this is a generic EGL issue (nvidia being the main driver in need of
this call, see 9c4c4eaaa1 for a longer
discussion).
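As a sketch, the spec-conformant call looks like this:

    /* the spec only allows GL_BACK_LEFT (not GL_BACK) for the default
     * framebuffer when using glDrawBuffers() */
    static const GLenum buffers[1] = { GL_BACK_LEFT };

    glDrawBuffers (1, buffers);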
Fixes #4402
gdk_display_create_gl_context() only returns NULL when there is an error
set, or it asserts/aborts. So the nullable annotation isn't needed.
Similar to 53312cf696
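For illustration, the resulting doc comment would read along these lines (the
exact wording is assumed):

    /**
     * gdk_display_create_gl_context:
     * @self: a GdkDisplay
     * @error: return location for an error
     *
     * Returns: (transfer full): the newly created GdkGLContext
     */

i.e. without the extra (nullable) on the return value.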
This is an alternative to gdk_surface_create_gl_context() when the
context is meant to only draw to textures.
This is useful in the testsuite, in GStreamer, or with GLArea: basically
whenever we want to do GL stuff but don't need to actually draw anything on
screen.
A bunch of code will need to be updated to deal with context->surface
being NULL.
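A sketch of the intended usage, e.g. from a test or a GStreamer element:

    GError *error = NULL;
    GdkGLContext *context;

    context = gdk_display_create_gl_context (display, &error);
    if (context == NULL || !gdk_gl_context_realize (context, &error))
      {
        g_warning ("No GL: %s", error->message);
        g_clear_error (&error);
        g_clear_object (&context);
        return;
      }

    gdk_gl_context_make_current (context);
    /* do GL work against textures/FBOs; there is no surface to draw to */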
Make it use gdk_memory_texture_from_texture().
Also make gdk_memory_format_alpha() privately available so that we can
detect if an image contains an alpha channel.
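A sketch of the kind of check this enables (gdk_memory_format_alpha() is
private API; the enum value name is an assumption):

    if (gdk_memory_format_alpha (format) == GDK_MEMORY_ALPHA_OPAQUE)
      {
        /* no alpha channel, the image is fully opaque */
      }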
This is a port of the fix in the quartz backend to the new macOS backend.
From the original commit:
In macOS-12.sdk CGContextConvertSizeToDeviceSpace returns a negative
height and passing that to CGContextScaleCTM in turn causes the cairo
surface to draw outside the window where it can't be seen. Passing the
absolute values of the scale factors fixes the display on macOS 12 without
affecting earlier macOS versions.
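A sketch of the fix, using the CoreGraphics C API (fabs() from <math.h>):

    CGSize scale;

    scale = CGContextConvertSizeToDeviceSpace (cg_context, CGSizeMake (1.0, 1.0));

    /* macOS 12 can report a negative height here; the scale factors
     * passed on must stay positive */
    CGContextScaleCTM (cg_context, fabs (scale.width), fabs (scale.height));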
Don't pass texture + rect, but instead have
gdk_memory_texture_new_subtexture()
and use it to generate subtextures and pass them.
This has the advantage of downloading a too-large texture only once
instead of N times.
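A sketch of a plausible signature for the new helper (the parameter list is
an assumption; the function is private API):

    GdkTexture *
    gdk_memory_texture_new_subtexture (GdkMemoryTexture *texture,
                                       int               x,
                                       int               y,
                                       int               width,
                                       int               height);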
Close widget-factory and observe:
Thread 1:
* acquire main loop
* handle close button
* close window
* dispose video and media stream
* stop GstPlayer
WAIT on pipeline stopping
Thread 2:
* prepare next image in pipeline
* hand image to GtkGstSink
* create GdkTexture from image
* gdk_gl_texture_new() determines format
WAIT on acquiring main loop
Sounds like a deadlock?
Indeed, so don't do that.
It does not belong in GdkGLContext, it's a renderer thing.
It's also the only user of that API.
Introduce gdk_gl_context_check_version() private API to make version
checks simpler.
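A hypothetical sketch (the exact parameter list is an assumption):

    /* require at least GL 3.2, or at least GLES 3.0 */
    if (!gdk_gl_context_check_version (context, 3, 2, 3, 0))
      return FALSE;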
It turns out glReadPixels() cannot convert pixels, and only specific values
are allowed for its format and type arguments. You need to know which ones,
or things will explode.
GL is great.
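For example, on GLES only one format/type combination is guaranteed to work;
anything else has to be queried first:

    GLint format, type;

    /* the one combination that is always supported */
    glReadPixels (0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* anything else: ask the implementation what it accepts */
    glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_FORMAT, &format);
    glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);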
Pass a format to GdkTextureClass::download(). That way we can download
data in any format.
Also replace gdk_texture_download_texture() with
gdk_memory_texture_from_texture() which again takes a format.
The old functionality is still there for code that wants it: Just pass
gdk_texture_get_format (texture) as the format argument.
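So code that wants the old behavior would do something like this (both
functions are private API):

    GdkTexture *copy;

    copy = gdk_memory_texture_from_texture (texture,
                                            gdk_texture_get_format (texture));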
Broadway is the only GTK+ backend that throws an error on stderr when
failing to initialise, which causes problems when gtk_init_check() is
used and unexpected error output is generated.
This causes hotdoc to fail when generating a GTK plugin's documentation
instead of failing quietly.
"Unable to init server: Could not connect: Connection refused"
Otherwise, if we hide and show a window, we create a new surface, breaking
the compositor's association, but potentially do not resend this data for the
new surface.
This matches what we do for input_region.
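Roughly, the idea is (the field names are hypothetical):

    /* when (re)creating the wl_surface, re-apply what the old one had */
    wl_surface_set_opaque_region (impl->wl_surface, impl->opaque_region);
    wl_surface_set_input_region (impl->wl_surface, impl->input_region);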
Add gdk_gl_context_is_api_allowed() for backends and make them use it.
Finally, have them return the final API as the return value (or 0 on
error).
And then use that API instead of a use_es boolean flag.
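A hypothetical sketch of a backend realize() vfunc using this:

    static GdkGLAPI
    realize (GdkGLContext *context,
             GError      **error)
    {
      if (gdk_gl_context_is_api_allowed (context, GDK_GL_API_GLES, NULL))
        {
          /* ... create a GLES context ... */
          return GDK_GL_API_GLES;
        }

      if (gdk_gl_context_is_api_allowed (context, GDK_GL_API_GL, error))
        {
          /* ... create a desktop GL context ... */
          return GDK_GL_API_GL;
        }

      return 0;
    }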
Fixes #4221
For MemoryTexture, this is a simple change.
For GLTexture, we need to query the format at texture creation. This
sounds like a bad idea and extra work until one realizes that we'd
need to do that anyway when using the texture the first time - either
when downloading, or when trying to use it in a rendernode, where we
will soon need that information to determine if the texture prefers high
depth.
The term "hdr" is so overloaded, we shouldn't use them anywhere, except
from maybe describing all of this work in blog posts and other marketing
materials.
So do renames:
* hdr => high_depth
* request_hdr => prefers_high_depth
This more accurately describes what is going on.
Also, now make gdk_memory_convert() the only conversion function and allow
conversions between any two formats by going via a float[4].
This could be optimized via fast-paths, but so far it isn't.
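Schematically, with hypothetical unpack/pack helpers:

    float pixel[4];

    /* any source format -> RGBA floats */
    unpack_float (src_format, src_data, pixel);

    /* RGBA floats -> any destination format */
    pack_float (dst_format, dst_data, pixel);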
If EGL supports:
* no-config contexts
* >8bits pixel formats
* (optionally) floating point pixel formats
Then select such a profile as the HDR format and use it when HDR is
requested.
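A sketch of the kind of attribute list this implies (the float part assumes
the EGL_EXT_pixel_format_float extension):

    EGLint attrs[] = {
      EGL_RED_SIZE, 10,        /* >8 bits per channel */
      EGL_GREEN_SIZE, 10,
      EGL_BLUE_SIZE, 10,
      /* optionally, with EGL_EXT_pixel_format_float:
       * EGL_COLOR_COMPONENT_TYPE_EXT, EGL_COLOR_COMPONENT_TYPE_FLOAT_EXT, */
      EGL_NONE
    };
    EGLConfig config;
    EGLint n_configs;

    eglChooseConfig (egl_display, attrs, &config, 1, &n_configs);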
Forces request_hdr = TRUE for all requests.
Backends should also use this when choosing whether to honor HDR
requests for low quality compositors - as long as the compositor
pretends to support HDR, shovel HDR at it.
Unify the X11 and Wayland EGL contexts.
This is a bit ugly to implement, because I don't want to create an
interface and I can't make them inherit from the same object, because
one needs to inherit from X11GLContext and the other from
WaylandGLContext.
So we have to put the code in GdkGLContext and make sure non-EGL
contexts can't accidentally run it. This is rather easy because we can
just check for priv->egl_context != NULL.
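Roughly, the guard in the shared path looks like this (a hypothetical sketch;
egl_display and egl_surface stand in for wherever those are stored):

    GdkGLContextPrivate *priv = gdk_gl_context_get_instance_private (context);

    if (priv->egl_context == NULL)
      return FALSE;   /* not an EGL context; let the backend handle it */

    eglMakeCurrent (egl_display, egl_surface, egl_surface, priv->egl_context);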