If shaders don't support nonuniform indexing, we emulate it via if/else
ladders (or switch ladders) that get inlined by the GLSL compiler and
massively blow up the code.
That makes shader compilation take minutes and results in shader code
that isn't necessarily faster.
So we disable it on GL entirely and on Vulkan if the required features
aren't available.
As it's only an optimization and does not fall back to Cairo anymore,
this should be fine.
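For illustration, the Vulkan side of such a feature check could look
roughly like this, using the descriptor-indexing features that were
promoted to core in Vulkan 1.2 (physical_device and the exact set of
fields we care about are placeholders, not the actual GSK code):

    VkPhysicalDeviceDescriptorIndexingFeatures indexing = {
      .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DESCRIPTOR_INDEXING_FEATURES,
    };
    VkPhysicalDeviceFeatures2 features = {
      .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
      .pNext = &indexing,
    };

    vkGetPhysicalDeviceFeatures2 (physical_device, &features);

    /* Only use nonuniform indexing in the shaders if the device supports
     * it; otherwise don't bother with the slow-to-compile emulation. */
    gboolean have_nonuniform =
      indexing.shaderSampledImageArrayNonUniformIndexing &&
      indexing.shaderStorageBufferArrayNonUniformIndexing;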
I did it because it unifies the code.
But it also gains the benefit of being debuggable, because it can
now be turned off via GDK_VULKAN_SKIP=incremental-present.
This ensures both that we signal a semaphore for a dmabuf when we export
an image and that we import semaphores for dmabufs and wait on them.
Fixes Vulkan node-editor displaying the Vulkan renderer in the sidebar.
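As a rough sketch of the import side, assuming VK_KHR_external_semaphore_fd
is enabled and sync_fd came from the dmabuf (names are illustrative, not
the actual code):

    VkImportSemaphoreFdInfoKHR import_info = {
      .sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
      .semaphore = semaphore,
      .flags = VK_SEMAPHORE_IMPORT_TEMPORARY_BIT,
      .handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_SYNC_FD_BIT,
      .fd = sync_fd,
    };
    PFN_vkImportSemaphoreFdKHR import_semaphore_fd =
      (PFN_vkImportSemaphoreFdKHR)
      vkGetDeviceProcAddr (device, "vkImportSemaphoreFdKHR");

    /* Ownership of sync_fd passes to Vulkan on success. */
    import_semaphore_fd (device, &import_info);

    /* The semaphore then goes into pWaitSemaphores of the VkSubmitInfo so
     * rendering waits on the dmabuf producer; the export side works the
     * other way around via vkGetSemaphoreFdKHR. */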
This code does not add a downloader, so we do not claim support for all
the new formats.
It just queries the formats. But this can be used to import dmabufs
directly into the Vulkan renderer.
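The query itself is the usual two-call pattern from
VK_EXT_image_drm_format_modifier; a sketch (format and variable names
are just examples):

    VkDrmFormatModifierPropertiesListEXT modifier_list = {
      .sType = VK_STRUCTURE_TYPE_DRM_FORMAT_MODIFIER_PROPERTIES_LIST_EXT,
    };
    VkFormatProperties2 props = {
      .sType = VK_STRUCTURE_TYPE_FORMAT_PROPERTIES_2,
      .pNext = &modifier_list,
    };

    /* First call: get the number of modifiers for this format. */
    vkGetPhysicalDeviceFormatProperties2 (physical_device,
                                          VK_FORMAT_B8G8R8A8_UNORM,
                                          &props);

    modifier_list.pDrmFormatModifierProperties =
      g_new (VkDrmFormatModifierPropertiesEXT,
             modifier_list.drmFormatModifierCount);

    /* Second call: fill in the modifiers and their plane counts. */
    vkGetPhysicalDeviceFormatProperties2 (physical_device,
                                          VK_FORMAT_B8G8R8A8_UNORM,
                                          &props);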
Use it to collect the optional features we are interested in and turn
them on only if available.
For now we add the dmabuf features, but we don't use them yet.
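A minimal sketch of collecting one such optional piece, here checking
for the dmabuf memory extension before enabling it at device creation
(illustrative only, not the actual GDK code):

    uint32_t n_extensions;
    VkExtensionProperties *extensions;
    gboolean have_dmabuf = FALSE;

    vkEnumerateDeviceExtensionProperties (physical_device, NULL,
                                          &n_extensions, NULL);
    extensions = g_new (VkExtensionProperties, n_extensions);
    vkEnumerateDeviceExtensionProperties (physical_device, NULL,
                                          &n_extensions, extensions);

    for (uint32_t i = 0; i < n_extensions; i++)
      {
        if (g_str_equal (extensions[i].extensionName,
                         VK_EXT_EXTERNAL_MEMORY_DMA_BUF_EXTENSION_NAME))
          have_dmabuf = TRUE;
      }
    g_free (extensions);

    /* Only append the extension to VkDeviceCreateInfo.ppEnabledExtensionNames
     * when have_dmabuf is TRUE. */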
Add an implementation of GdkDmabufDownloader that uses
gsk_renderer_render_texture + GL texture download.
Since gsk isn't thread-safe, we do the download in the main thread,
taking care to not disturb the current GL context of whatever is
going on there at the time.
And since gsk renderers are expensive to create, we cache it
in the display.
Note that gsk does not yet have any special support for
dmabuf textures, so for now, they will always get downloaded
and then reuploaded as GL textures.
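Roughly, the download step with the context save/restore around it looks
like this (renderer, node, viewport, width and height are assumed to
exist; this is a sketch of the idea, not the downloader itself):

    GdkGLContext *previous = gdk_gl_context_get_current (); /* may be NULL */
    GdkTexture *texture;
    guchar *data;

    texture = gsk_renderer_render_texture (renderer, node, &viewport);

    /* gdk_texture_download() gives us GDK_MEMORY_DEFAULT, 4 bytes/pixel. */
    data = g_malloc (4 * width * height);
    gdk_texture_download (texture, data, 4 * width);

    /* Put the main thread's GL context back the way we found it. */
    if (previous)
      gdk_gl_context_make_current (previous);
    else
      gdk_gl_context_clear_current ();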
GdkDmabuf is a struct encapsulating all the values of a dmabuf, so
nothing to see here.
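For reference, the struct is roughly shaped like this (plane count and
field names from memory, so treat it as a sketch):

    #define GDK_DMABUF_MAX_PLANES 4

    typedef struct _GdkDmabuf GdkDmabuf;

    struct _GdkDmabuf
    {
      guint32 fourcc;       /* DRM fourcc of the format */
      guint64 modifier;     /* DRM format modifier */
      unsigned int n_planes;
      struct {
        int fd;             /* dmabuf file descriptor */
        unsigned int stride;
        unsigned int offset;
      } planes[GDK_DMABUF_MAX_PLANES];
    };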
GdkDmabufDownloader is a vtable for a thing that can download dmabufs.
For now only one implementation exists, so this just looks like a ton
of work for no benefit.
The only neat thing is that gdkdmabuftexture.c got a whole lot tidier.
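The vtable itself is tiny; something along these lines (the method names
and signatures here are guesses for illustration, not the real header):

    typedef struct _GdkDmabufDownloader GdkDmabufDownloader;

    struct _GdkDmabufDownloader
    {
      const char *name;

      gboolean (* supports) (GdkDmabufDownloader  *downloader,
                             GdkDisplay           *display,
                             const GdkDmabuf      *dmabuf,
                             GError              **error);

      void     (* download) (GdkDmabufDownloader  *downloader,
                             GdkTexture           *texture,
                             GdkMemoryFormat       format,
                             guchar               *data,
                             gsize                 stride);
    };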
These are the dmabuf formats that we can import
into a GL context as an EGLImage, and successfully
download.
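The import path these formats go through is essentially the standard
EGL_EXT_image_dma_buf_import dance (single-plane case shown, names
illustrative; libepoxy resolves the extension entry points for us):

    EGLint attribs[] = {
      EGL_WIDTH,                     width,
      EGL_HEIGHT,                    height,
      EGL_LINUX_DRM_FOURCC_EXT,      dmabuf->fourcc,
      EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf->planes[0].fd,
      EGL_DMA_BUF_PLANE0_OFFSET_EXT, dmabuf->planes[0].offset,
      EGL_DMA_BUF_PLANE0_PITCH_EXT,  dmabuf->planes[0].stride,
      EGL_NONE
    };
    EGLImageKHR image;
    GLuint tex;

    image = eglCreateImageKHR (egl_display, EGL_NO_CONTEXT,
                               EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

    glGenTextures (1, &tex);
    glBindTexture (GL_TEXTURE_2D, tex);
    glEGLImageTargetTexture2DOES (GL_TEXTURE_2D, image);
    /* The texture can then be downloaded, e.g. via an FBO and glReadPixels. */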
We skip the GdkDisplay:dmabuf-formats property
in the default value tests, since the nominal
default value is NULL, but the actual value is
constructed on demand.
Have a resource path => VkShaderModule hash table instead of doing fancy
custom objects.
A benefit is that shader modules are now shared between all renderers
and pipelines.
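A minimal sketch of the lookup-or-create path, assuming a GHashTable
keyed by the resource path (the helper name and the 64-bit assumption
that a VkShaderModule handle fits into a pointer are mine):

    static VkShaderModule
    get_shader_module (VkDevice    device,
                       GHashTable *modules, /* resource path => VkShaderModule */
                       const char *resource_path)
    {
      VkShaderModule module;
      GBytes *bytes;
      gsize size;
      const guint32 *code;

      /* Non-dispatchable handles are pointer-sized on 64-bit platforms. */
      module = (VkShaderModule) g_hash_table_lookup (modules, resource_path);
      if (module != VK_NULL_HANDLE)
        return module;

      /* The shaders are compiled into the gresource, so this shouldn't fail. */
      bytes = g_resources_lookup_data (resource_path,
                                       G_RESOURCE_LOOKUP_FLAGS_NONE, NULL);
      code = g_bytes_get_data (bytes, &size);

      VkShaderModuleCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = size,
        .pCode = code,
      };
      vkCreateShaderModule (device, &info, NULL, &module);

      g_hash_table_insert (modules, g_strdup (resource_path), (gpointer) module);
      g_bytes_unref (bytes);

      return module;
    }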
Make the display handle the cache, because we only need one.
We store the cache in
$CACHE_DIR/gtk-4.0/vulkan-pipeline-cache/$UUID.$VERSION
so we regenerate caches for each different device (different UUID) and
each different driver version.
We also keep track of the etag of the cache file, so if 2 different
applications update the cache, we can detect that.
Vulkan allows merging caches, so the 2nd app reloads the new cache file
and merges it into its cache before saving.
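The merge step can be sketched like this, assuming cache is our live
VkPipelineCache and cache_path points at the file the other application
just rewrote (names illustrative):

    gchar *contents;
    gsize size;

    if (g_file_get_contents (cache_path, &contents, &size, NULL))
      {
        VkPipelineCacheCreateInfo info = {
          .sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO,
          .initialDataSize = size,
          .pInitialData = contents,
        };
        VkPipelineCache other;

        if (vkCreatePipelineCache (device, &info, NULL, &other) == VK_SUCCESS)
          {
            /* Pull the other app's entries into our cache, then save as
             * usual with vkGetPipelineCacheData(). */
            vkMergePipelineCaches (device, cache, 1, &other);
            vkDestroyPipelineCache (device, other, NULL);
          }

        g_free (contents);
      }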
Pretty much copy what GL does and just use the default display to create
GPU-related resources without the caller having to provide a display.
This also adds gdk_display_create_vulkan_context() but I've
kept it private because the Vulkan API is generally considered in flux,
in particular with our pending attempts to redo how renderers work.
The GLES 2.0 version is fine now with current GTK, according to B. Otte.
Let's use the same minimum requirement for all implementations.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
It appears that NVIDIA does not provide EGL_EXT_swap_buffers_with_damage
in their EGL implementation, but does implement the KHR variant of it.
This checks for a suitable variant and stores a pointer to the
compatible implementation in the GdkGLContextPrivate struct.
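A sketch of the lookup (egl_display is assumed; in GTK the entry points
actually come via libepoxy, so take this as the general shape only):

    /* Needs <EGL/egl.h>, <EGL/eglext.h> and <string.h>. */
    PFNEGLSWAPBUFFERSWITHDAMAGEEXTPROC swap_buffers_with_damage = NULL;
    const char *extensions = eglQueryString (egl_display, EGL_EXTENSIONS);

    if (extensions && strstr (extensions, "EGL_EXT_swap_buffers_with_damage"))
      swap_buffers_with_damage = (PFNEGLSWAPBUFFERSWITHDAMAGEEXTPROC)
        eglGetProcAddress ("eglSwapBuffersWithDamageEXT");
    else if (extensions && strstr (extensions, "EGL_KHR_swap_buffers_with_damage"))
      swap_buffers_with_damage = (PFNEGLSWAPBUFFERSWITHDAMAGEEXTPROC)
        eglGetProcAddress ("eglSwapBuffersWithDamageKHR");

    /* Both variants share the same signature, so one stored pointer
     * covers NVIDIA (KHR) and everyone else (EXT). */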
The term "hdr" is so overloaded, we shouldn't use them anywhere, except
from maybe describing all of this work in blog posts and other marketing
materials.
So do renames:
* hdr => high_depth
* request_hdr => prefers_high_depth
This more accurately describes what is going on.
If EGL supports:
* no-config contexts
* >8-bit pixel formats
* (optionally) floating point pixel formats
Then select such a profile as the HDR format and use it when HDR is
requested.
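A sketch of what asking EGL for a deeper config looks like (the attribute
values are just an example 10-bit config; the floating point case would
additionally use EGL_COLOR_COMPONENT_TYPE_FLOAT_EXT from
EGL_EXT_pixel_format_float):

    static const EGLint attrs[] = {
      EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
      EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
      EGL_RED_SIZE,        10,
      EGL_GREEN_SIZE,      10,
      EGL_BLUE_SIZE,       10,
      EGL_ALPHA_SIZE,      2,
      EGL_NONE
    };
    EGLConfig config;
    EGLint n_configs;

    if (!eglChooseConfig (egl_display, attrs, &config, 1, &n_configs) ||
        n_configs == 0)
      {
        /* No high-depth config: fall back to the regular 8-bit one. */
      }

    /* With EGL_KHR_no_config_context the context itself is created with
     * EGL_NO_CONFIG_KHR, so the config choice only affects the surface. */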
Instead of
Display::make_gl_context_current()
we now have
GLContext::clear_current()
GLContext::make_current()
This fits better with the backends (we can actually implement
clearCurrent on macOS now) and makes it easier to implement different GL
backends for backends (like EGL/GLX on X11).
We also pass a surfaceless boolean to make_current() so the calling code
can decide if a surface needs to be bound or not, because the backends
were all doing whatever, which was very counterproductive.
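In other words, the per-context entry points now look roughly like this
(the struct layout below is illustrative, not the actual GdkGLContextClass):

    struct _GdkGLContextClass
    {
      /* ... */

      gboolean (* clear_current) (GdkGLContext *context);

      gboolean (* make_current)  (GdkGLContext *context,
                                  gboolean      surfaceless);
    };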
The vfunc is called to initialize GL and it returns a "base" context
that GDK then uses as the context all others are shared with. So the GL
context share tree now looks like:
+ context from init_gl
- context1
- context2
...
So this is a flat tree now, the complexity is gone.
The only caveat is that backends now need to create a GL context when
initializing GL so some refactoring was needed.
Two new functions have been added:
* gdk_display_prepare_gl()
This is public API and can be used to ensure that GL has been
initialized or, if not, retrieve an error to display (or debug-print);
a usage sketch follows after this list.
* gdk_display_get_gl_context()
This is a private function to retrieve the base context from
init_gl(). It replaces gdk_surface_get_shared_data_context().
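For the first one, usage is as simple as this (display is any GdkDisplay):

    GError *error = NULL;

    if (!gdk_display_prepare_gl (display, &error))
      {
        g_warning ("GL is not available: %s", error->message);
        g_clear_error (&error);
        return;
      }

    /* GL is initialized; any GdkGLContext created for this display now
     * shares with the base context the backend returned from init_gl(). */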
... and move some members from the GdkDisplay struct.
We've always wanted to add one to isolate the display from the backends
a bit more, but so far it's never happened.
Now that I'm about to add more data to GdkDisplay, it's a good excuse to
start.
No users of gdk_display_peek_event, gdk_display_has_pending,
_gdk_display_event_data_copy or _gdk_display_event_data_free,
so drop all of these, and the related vfuncs.
Without a way to create events, there is no point
in allowing gdk_display_put_event to be used from
the outside. And little good can come out of using
the other APIs, so just make them all private.