For MemoryTexture, this is a simple change.
For GLTexture, we need to query the format at texture creation. This
sounds like a bad idea and extra work until one realizes that we'd
need to do that anyway when using the texture for the first time - either
when downloading, or when trying to use it in a rendernode, where we
will soon need that information to determine if the texture prefers high
depth.
The term "hdr" is so overloaded, we shouldn't use them anywhere, except
from maybe describing all of this work in blog posts and other marketing
materials.
So do renames:
* hdr => high_depth
* request_hdr => prefers_high_depth
This more accurately describes what is going on.
Also, now make gdk_memory_convert() the only conversion function
and allow conversions between any two formats by going via a float[4].
This could be optimized via fast-paths, but so far it isn't.
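As a rough sketch of the idea (not the actual GTK code), the conversion loop looks something like this; the to_float()/from_float()/bytes_per_pixel() per-format helpers are hypothetical:

```c
#include <gdk/gdk.h>

/* Sketch: convert pixel data between two arbitrary GdkMemoryFormats by
 * decoding each pixel into float[4] RGBA and re-encoding it. The
 * to_float()/from_float()/bytes_per_pixel() helpers are hypothetical. */
static void
convert_pixels (guchar          *dest,
                GdkMemoryFormat  dest_format,
                const guchar    *src,
                GdkMemoryFormat  src_format,
                gsize            n_pixels)
{
  gsize dest_bpp = bytes_per_pixel (dest_format);
  gsize src_bpp = bytes_per_pixel (src_format);
  float rgba[4];

  for (gsize i = 0; i < n_pixels; i++)
    {
      to_float (rgba, src + i * src_bpp, src_format);      /* decode */
      from_float (dest + i * dest_bpp, rgba, dest_format); /* encode */
    }
}
```

A fast path would special-case common format pairs instead of doing the float[4] round trip for every pixel.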
If EGL supports:
* no-config contexts
* >8bits pixel formats
* (optionally) floating point pixel formats
Then select such a profile as the HDR format and use it when HDR is
requested.
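Roughly, the config selection could look like this (a sketch using the EGL_EXT_pixel_format_float extension for the float case; the concrete bit depths are illustrative, not necessarily what GDK picks):

```c
#include <epoxy/egl.h>

/* Sketch: ask EGL for a >8-bit-per-channel, floating point config.
 * With no-config contexts, the context itself is created against
 * EGL_NO_CONFIG_KHR and a config like this is only needed per surface. */
static EGLConfig
choose_high_depth_config (EGLDisplay display)
{
  EGLint attrs[] = {
    EGL_RED_SIZE, 16,
    EGL_GREEN_SIZE, 16,
    EGL_BLUE_SIZE, 16,
    EGL_ALPHA_SIZE, 16,
    EGL_COLOR_COMPONENT_TYPE_EXT, EGL_COLOR_COMPONENT_TYPE_FLOAT_EXT,
    EGL_NONE
  };
  EGLConfig config;
  EGLint n_configs;

  if (!eglChooseConfig (display, attrs, &config, 1, &n_configs) || n_configs < 1)
    return NULL;

  return config;
}
```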
Forces request_hdr = TRUE for all requests.
Backends should also use this when choosing whether to honor HDR
requests for low quality compositors - as long as the compositor
pretends to support HDR, shovel HDR at it.
Unify the X11 and Wayland EGL contexts.
This is a bit ugly to implement, because I don't want to create an
interface and I can't make them inherit from the same object, because
one needs to inherit from X11GLContext and the other from
WaylandGLContext.
So we have to put the code in GdkGLContext and make sure non-EGL
contexts can't accidentally run it. This is rather easy because we can
just check for priv->egl_context != NULL.
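In practice the guard is as simple as this (a sketch; the private fields and the function name are illustrative):

```c
#include <epoxy/egl.h>

/* Sketch: EGL code shared between the X11 and Wayland backends lives in
 * GdkGLContext, but only runs when this context actually owns an
 * EGLContext; non-EGL contexts bail out immediately. */
static gboolean
gdk_gl_context_make_current_egl (GdkGLContext *self)
{
  GdkGLContextPrivate *priv = gdk_gl_context_get_instance_private (self);

  if (priv->egl_context == NULL)
    return FALSE; /* not an EGL context - let the GLX/WGL code handle it */

  return eglMakeCurrent (priv->egl_display,
                         priv->egl_surface,
                         priv->egl_surface,
                         priv->egl_context);
}
```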
We have a global GdkGLBackendType now, just set it.
This way, using the variable forces the backend type, and we don't need
special code handling the env vars in the backends.
It also means setting the env var will now "work" on GDK backends that
don't even support that GL backend and simulate another GDK backend
having registered that GL backend already. So you can run
GDK_DEBUG=gl-wgl gtk4-demo
to test what Wayland will do when WGL is in use.
Print the extensions one per line, and sort them
alphabetically, so it is actually possible to find
something in the list.
Also print a short description of the chosen config.
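A minimal sketch of that kind of output (using GLib string helpers; not the exact debug code):

```c
#include <gdk/gdk.h>
#include <stdlib.h>
#include <string.h>

static int
compare_strings (const void *a, const void *b)
{
  return strcmp (*(const char *const *) a, *(const char *const *) b);
}

/* Sketch: split the space-separated extension string, sort it
 * alphabetically and print one extension per line. */
static void
print_extensions_sorted (const char *extensions)
{
  char **names = g_strsplit (extensions, " ", -1);

  qsort (names, g_strv_length (names), sizeof (char *), compare_strings);

  for (gsize i = 0; names[i] != NULL; i++)
    g_print ("  %s\n", names[i]);

  g_strfreev (names);
}
```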
Include the appropriate headers as some function prototypes were moved lately.
Also, re-order the gdk/*private.h includes alphabetically
in the files that were updated.
Creative people managed to create an X11 display and a Wayland display
at once, thereby getting EGL and GLX involved in a fight to the death
over the ownership of the glFoo() symbolspace.
A way to force such a fight with available tools here is (on Wayland)
running something like:
GTK_INSPECTOR_DISPLAY=:1 GTK_DEBUG=interactive gtk4-demo
Related: xdg-desktop-portal-gnome#5
On Windows, GLES is not that widely available unless one installs wrapper
libraries such as libANGLE, so GLES/EGL support on Windows is used more as
a fallback mode when Desktop OpenGL (WGL) support is inadequate on the system.
Hence, unless one forces WGL or EGL, we first try to initialize WGL, then
try to initialize GLES if it is enabled and WGL initialization failed, and
then just return whatever the last result of these initialization attempts
was, since unlike X11 EGL contexts, we do not have separate modes for WGL
apart from legacy and non-legacy contexts.
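In other words, the initialization order is roughly this (a sketch; init_wgl() and init_egl() stand in for the real per-backend initialization, and the function itself is not the actual GdkWin32Display API):

```c
/* Sketch of the fallback order described above; helper names are
 * illustrative. */
static gboolean
win32_display_init_gl (GdkWin32Display *display,
                       gboolean         force_egl,
                       gboolean         gles_enabled,
                       GError         **error)
{
  /* try Desktop OpenGL (WGL) first unless EGL was forced */
  if (!force_egl && init_wgl (display, error))
    return TRUE;

  /* fall back to GLES/EGL if it is enabled at all */
  if (gles_enabled)
    {
      g_clear_error (error); /* retry with GLES/EGL */
      if (init_egl (display, error))
        return TRUE;
    }

  /* otherwise report whatever the last attempt left in @error */
  return FALSE;
}
```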
We were setting the WGL pixel format in GdkWin32Display too early, so the code
did not bail out correctly when we retried establishing the WGL context.
Fix this by deferring setting the WGL pixel format until after it passes the
shader availability check.
Should fix issue #4257.
1. Change INSUFFICIENT_MEMORY to TOO_LARGE
GTK crashes on insufficient memory, we don't emit GErrors.
2. Split UNSUPPORTED into UNSUPPORTED_CONTENT and UNSUPPORTED_FORMAT
So we know if you need to find an RPM with a loader or curse at
the weird file.
3. Translate error messages, they are meant for end users.
When loading, convert all >8-bit data to
GDK_MEMORY_R16G16B16A16_PREMULTIPLIED.
When saving, save all 8-bit formats as 8-bit RGBA,
and save all >8-bit formats as 16-bit RGBA.
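As a sketch, the save-side mapping amounts to something like this (the 8-bit check helper is hypothetical):

```c
#include <gdk/gdk.h>

/* Sketch: pick the format to serialize as, following the rule above. */
static GdkMemoryFormat
save_format_for (GdkMemoryFormat format)
{
  if (memory_format_is_8bit (format)) /* hypothetical helper */
    return GDK_MEMORY_R8G8B8A8_PREMULTIPLIED;    /* 8-bit RGBA  */

  return GDK_MEMORY_R16G16B16A16_PREMULTIPLIED;  /* 16-bit RGBA */
}
```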
Use our own loader to (de)serialize textures
to and from png and tiff.
We still fall back to gdk-pixbuf for handling all
the other image formats, and for pixbufs.
This is a companion to gdk_texture_save_to_png, using
the tiff format, which will let us avoid lossy conversion
of HDR data, since we can store floating point data.
Add support for the tiff format, which is flexible
enough to handle all our memory texture formats
without loss.
As a consequence, we are now linking against libtiff.
Using libpng instead of the lowest-common-denominator
gdk-pixbuf loader. This will allow us to load >8-bit data,
and apply gamma and color correction in the future.
For now, this still just provides RGBA8 data.
As a consequence, we are now linking against libpng.
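For illustration only (this is not the GTK loader), reading an 8-bit RGBA image with libpng's simplified API (libpng 1.6+) looks roughly like this:

```c
#include <png.h>
#include <stdlib.h>

/* Sketch: read a PNG file into a malloc'ed RGBA8 buffer. */
static unsigned char *
load_png_rgba8 (const char *filename,
                unsigned   *width,
                unsigned   *height)
{
  png_image image = { 0, };
  unsigned char *pixels;

  image.version = PNG_IMAGE_VERSION;
  if (!png_image_begin_read_from_file (&image, filename))
    return NULL;

  image.format = PNG_FORMAT_RGBA;
  pixels = malloc (PNG_IMAGE_SIZE (image));
  if (pixels == NULL ||
      !png_image_finish_read (&image, NULL, pixels, 0, NULL))
    {
      png_image_free (&image);
      free (pixels);
      return NULL;
    }

  *width = image.width;
  *height = image.height;
  return pixels;
}
```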
GLES only allows downloading float if the texture matches specific
criteria and I'm too lazy to determine them, so always fall back.
And the custom stride fallback code isn't necessary, because falling
back does exactly that step already.
This happens in the real world when using the inspector to look at a
node recording of a GStreamer video while the video is still playing.
GStreamer will use the GL context in a different thread while we are
busy trying to download it.
A test is included.
1. The download via gdk_cairo_draw_from_gl() was broken sometimes
2. We get easy conversion on fallback by chaining up and using
download_texture().
3. One more place where Cairo is no longer necessary.
1. It avoids Cairo, and in particular conversion to Cairo.
2. Keeping a texture allows easy chaining in the vfuncs.
3. Using a texture means releasing will work for HDR formats
too, once we add them.
A private vfunc that downloads a texture as a GdkMemoryTexture in
whatever format the texture deems best.
There are multiple reasons for this:
* GLES cannot download the Cairo format. But it can download some
format and then just delegate to the GdkMemoryTexture implementation.
* All the other download vfuncs (including the ones still coming) can
be implemented via download_texture() and delegation, making the
interface easier.
* We want to implement image loading and saving support. By using
download_texture(), we can save in the actual format of the texture.
* A potential GdkCompressedTexture could be implemented by just
providing this one vfunc as a compress() step.
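A sketch of the delegation pattern (the download_texture vfunc name is from above; everything else is illustrative):

```c
/* Sketch: any download path can first ask the subclass for its preferred
 * GdkMemoryTexture and then convert from that, instead of implementing
 * every target format itself. convert_memory_texture() is a stand-in for
 * the real conversion step. */
static void
texture_download (GdkTexture      *texture,
                  GdkMemoryFormat  format,
                  guchar          *data,
                  gsize            stride)
{
  GdkTexture *mem = GDK_TEXTURE_GET_CLASS (texture)->download_texture (texture);

  convert_memory_texture (mem, format, data, stride);

  g_object_unref (mem);
}
```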
It seems these are sent with `xwindow` set to the root window, so this
was failing to find a surface and get the screen from that.
I'm not sure if there's a reason not to get the screen this way
elsewhere in the function, but it seems this should be correct.
This fixes the behavior of `gdk_x11_display_get_monitors()`, which
wasn't correctly changing when monitors were added or removed. For
instance, this python code was always showing the same number of
monitors when one was turned off and on, but updates correctly with this
change applied:
```python
import gi
gi.require_version("GLib", "2.0")
gi.require_version("Gdk", "4.0")
gi.require_version("Gtk", "4.0")
from gi.repository import GLib, Gdk, Gtk

Gtk.init()  # open the default display so Gdk.Display.get_default() works

def f():
    print(len(Gdk.Display.get_default().get_monitors()))
    return True

GLib.timeout_add_seconds(1, f)
GLib.MainLoop().run()
```
_gdk_macos_event_source_new() calls g_source_set_static_name(), which
for GLib versions before 2.69.1 is a macro defined in gdk-private.h.
Fixes #4195
Goals:
1. Provide as much information as possible in the error message, so
users can try to fix their system themselves.
2. Try to formulate the error message in a way that explains that this
is not something GTK can fix, but a lower layer problem.
Related: #4193
When we initialize OpenGL, check whether we have OpenGL 2.0 or later; if not,
check whether we have the 'GL_ARB_shader_objects' extension, since we must be
able to support shaders if using OpenGL for GTK.
If we don't support shaders, as is the case with some Windows graphics
drivers that do not support OpenGL adequately (notably older Intel drivers),
reject and destroy the GL context that we created, and fall back to the Cairo
GSK renderer, so that things continue to run, albeit with an expected warning
message that the GL context cannot be realized.
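With the dummy WGL context made current, the check itself boils down to something like this (a sketch using libepoxy, which GDK already uses for GL dispatch):

```c
#include <glib.h>
#include <epoxy/gl.h>

/* Sketch: shaders are usable if we have OpenGL >= 2.0, or at least the
 * GL_ARB_shader_objects extension on older contexts. */
static gboolean
context_supports_shaders (void)
{
  if (epoxy_gl_version () >= 20)
    return TRUE;

  return epoxy_has_gl_extension ("GL_ARB_shader_objects");
}
```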
Also, when we could not make the created dummy WGL context current during
initialization, make sure that we destroy the dummy WGL context as well.
Fixes issue #4165.