Make a deep texture if the render nodes have
high depth content.
For now, we use 32F here for the deep format,
since using 16F causes small rounding errors
that break the memorytexture roundtrip tests.
Look at the framebuffer and the render node to
determine what format to use for intermediate
textures.
Our preference is to use fp16 if we have it
and it makes sense for the framebuffer we're given.
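Roughly, the decision looks like this (the fp16 capability
check is a stand-in, not the actual API; the prefers-high-depth
query is the private API described below):

    static GLenum
    choose_intermediate_format (GdkGLContext  *context,
                                GskRenderNode *node)
    {
      /* Nothing in the subtree needs more than 8 bits. */
      if (!gsk_render_node_prefers_high_depth (node))
        return GL_RGBA8;

      /* Prefer fp16 when the context supports it - half the
       * memory and bandwidth of fp32. */
      if (context_supports_fp16 (context)) /* stand-in check */
        return GL_RGBA16F;

      /* Otherwise fall back to fp32, which is always safe. */
      return GL_RGBA32F;
    }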
Add private API to find out if the content
of a render node should be considered 'deep'.
The information is collected at creation time,
so there is no tree-walking involved when we
are using this information in the renderer.
Currently, this comes down to whether there are
any texture nodes with high depth textures in the subtree.
In the future, we may want to allow marking gradient
nodes in this way as well.
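A sketch of what this amounts to (field and helper names are
illustrative, not the exact code):

    /* The flag is computed once, when the node is created. */
    gboolean
    gsk_render_node_prefers_high_depth (const GskRenderNode *node)
    {
      return node->prefers_high_depth;
    }

    /* Container nodes simply OR together their children's flags
     * at creation time, so no later tree walk is needed. */
    static gboolean
    children_prefer_high_depth (GskRenderNode **children,
                                guint           n_children)
    {
      for (guint i = 0; i < n_children; i++)
        {
          if (gsk_render_node_prefers_high_depth (children[i]))
            return TRUE;
        }

      return FALSE;
    }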
For MemoryTexture, this is a simple change.
For GLTexture, we need to query the format at texture creation. This
sounds like a bad idea and extra work until one realizes that we'd
need to do that anyway when using the texture the first time - either
when downloading, or when trying to use it in a render node, where we
will soon need that information to determine if the texture prefers high
depth.
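The query itself is a single GL call; a minimal sketch, assuming a
current GL context and the texture's GL id:

    GLint internal_format;

    glBindTexture (GL_TEXTURE_2D, texture_id);
    glGetTexLevelParameteriv (GL_TEXTURE_2D, 0,
                              GL_TEXTURE_INTERNAL_FORMAT,
                              &internal_format);

    /* e.g. GL_RGBA16F or GL_RGBA32F then count as high depth */

Note that on GLES this query only exists since 3.1; older versions
would need to track the format from the creation arguments instead.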
The term "hdr" is so overloaded, we shouldn't use them anywhere, except
from maybe describing all of this work in blog posts and other marketing
materials.
So do renames:
* hdr => high_depth
* request_hdr => prefers_high_depth
This more accurately describes what is going on.
Also, make gdk_memory_convert() the only conversion function
and allow conversions between any 2 formats by going via a float[4].
This could be optimized with fast paths, but so far it isn't.
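The generic path boils down to a decode step and an encode step
(to_float / from_float stand in for the per-format helpers this
sketch assumes):

    static void
    convert_pixel (guchar          *dest,
                   GdkMemoryFormat  dest_format,
                   const guchar    *src,
                   GdkMemoryFormat  src_format)
    {
      float pixel[4];

      /* Decode any source format into a common RGBA float[4]... */
      to_float (src_format, src, pixel);

      /* ...then encode that value into the destination format. */
      from_float (dest_format, pixel, dest);
    }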
If EGL supports:
* no-config contexts
* >8bits pixel formats
* (optionally) floating point pixel formats
Then select such a config as the HDR format and use it when HDR is
requested.
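A hedged sketch of such a config request (egl_display is assumed to
be an initialized EGLDisplay; error handling and fallback omitted):

    static const EGLint hdr_attrs[] = {
      EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
      EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
      /* more than 8 bits per channel */
      EGL_RED_SIZE, 16,
      EGL_GREEN_SIZE, 16,
      EGL_BLUE_SIZE, 16,
      EGL_ALPHA_SIZE, 16,
      /* floating point pixels, via EGL_EXT_pixel_format_float */
      EGL_COLOR_COMPONENT_TYPE_EXT, EGL_COLOR_COMPONENT_TYPE_FLOAT_EXT,
      EGL_NONE
    };

    EGLConfig config;
    EGLint n_configs;

    eglChooseConfig (egl_display, hdr_attrs, &config, 1, &n_configs);

With EGL_KHR_no_config_context, the context itself is created against
EGL_NO_CONFIG_KHR, so one context can serve both the 8-bit and the
HDR config.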
Forces request_hdr = TRUE for all requests.
Backends should also use this when deciding whether to honor HDR
requests on low-quality compositors - as long as the compositor
pretends to support HDR, shovel HDR at it.
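A plausible shape for that override (the debug flag name and the
check macro are assumptions):

    /* Debug override: pretend every surface asked for HDR. */
    if (GDK_DISPLAY_DEBUG_CHECK (display, HDR))
      request_hdr = TRUE;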
Unify the X11 and Wayland EGL contexts.
This is a bit ugly to implement: I don't want to create an
interface, and they can't inherit from the same object, because
one needs to inherit from X11GLContext and the other from
WaylandGLContext.
So we have to put the code in GdkGLContext and make sure non-EGL
contexts can't accidentally run it. This is rather easy because we can
just check for priv->egl_context != NULL.
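In code, that guard is a one-line early bail-out (the private struct
access follows the usual G_DEFINE_TYPE_WITH_PRIVATE pattern):

    GdkGLContextPrivate *priv =
      gdk_gl_context_get_instance_private (context);

    /* Non-EGL contexts (WGL, GLX, CGL) never set egl_context,
     * so the shared EGL code simply returns for them. */
    if (priv->egl_context == NULL)
      return;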
Quietly export this function mainly for the benefit
of libadwaita, which can use this to install its
implementation of the gtk-inspector-page extension
point.
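For reference, implementing an extension point from the outside looks
roughly like this (the libadwaita type name here is illustrative):

    #include <gio/gio.h>

    /* Run once during library init: register our page type with
     * the inspector's extension point. */
    g_io_extension_point_implement ("gtk-inspector-page",
                                    ADW_TYPE_INSPECTOR_PAGE,
                                    "libadwaita",
                                    10);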
We have a global GdkGLBackendType now, just set it.
This way, using the variable forces the backend type, and we don't need
special code handling the env vars in the backends.
It also means setting the env var will now "work" on GDK backends that
don't even support that GL backend, and simulate another GDK backend
having registered that GL backend already. So you can run
GDK_DEBUG=gl-wgl gtk4-demo
to test what Wayland will do when WGL is in use.
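The global setter boils down to a first-caller-wins variable (names
follow the commit text; the details are a sketch):

    static GdkGLBackendType the_gl_backend_type = GDK_GL_NONE;

    void
    gdk_gl_backend_use (GdkGLBackendType backend_type)
    {
      /* The first backend to register wins; mixing GL backends
       * in one process is not supported. */
      if (the_gl_backend_type == GDK_GL_NONE)
        the_gl_backend_type = backend_type;

      g_assert (the_gl_backend_type == backend_type);
    }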