Make the display handle the cache, because we only need one.
We store the cache in
$CACHE_DIR/gtk-4.0/vulkan-pipeline-cache/$UUID.$VERSION
so we regenerate caches for each different device (different UUID) and
each different driver version.
We also keep track of the etag of the cache file, so that if two
different applications update the cache, we can detect it.
Vulkan allows merging caches, so the second app reloads the new cache
file and merges it into its own cache before saving.
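A minimal sketch of that merge step, assuming placeholder names
(device, my_cache, and the freshly reloaded file contents):

    #include <vulkan/vulkan.h>

    static void
    merge_disk_cache (VkDevice         device,
                      VkPipelineCache  my_cache,
                      const void      *disk_data,
                      size_t           disk_size)
    {
      VkPipelineCacheCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO,
        .initialDataSize = disk_size,
        .pInitialData = disk_data,
      };
      VkPipelineCache disk_cache;

      if (vkCreatePipelineCache (device, &info, NULL, &disk_cache) != VK_SUCCESS)
        return;

      /* fold the other app's entries into our live cache */
      vkMergePipelineCaches (device, my_cache, 1, &disk_cache);
      vkDestroyPipelineCache (device, disk_cache, NULL);
    }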
It's necessary now that we use storage buffers for gradients:
[ VUID-VkDescriptorSetLayoutBindingFlagsCreateInfo-descriptorBindingStorageBufferUpdateAfterBind-03008 ] Object 0: handle = 0x1e72d70, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x943cc552 | vkCreateDescriptorSetLayout(): pBindings[0] can't have VK_DESCRIPTOR_BINDING_UPDATE_AFTER_BIND_BIT for VK_DESCRIPTOR_TYPE_STORAGE_BUFFER since descriptorBindingStorageBufferUpdateAfterBind is not enabled. The Vulkan spec states: If VkPhysicalDeviceDescriptorIndexingFeatures::descriptorBindingStorageBufferUpdateAfterBind is not enabled, all bindings with descriptor type VK_DESCRIPTOR_TYPE_STORAGE_BUFFER must not use VK_DESCRIPTOR_BINDING_UPDATE_AFTER_BIND_BIT (https://www.khronos.org/registry/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorSetLayoutBindingFlagsCreateInfo-descriptorBindingStorageBufferUpdateAfterBind-03008)
Pretty much copy what GL does: use the default display to create
GPU-related resources when the caller doesn't provide a display.
This also adds gdk_display_create_vulkan_context(), but I've
kept it private because the Vulkan API is generally considered to be
in flux, in particular with our pending attempts to redo how renderers
work.
Replace gdk_memory_format_prefers_high_depth with the more generic
gdk_memory_format_get_depth() that returns the depth of the individual
channels.
Also make the GL renderer use that to pick the generic F16 format
instead of immediately going for F32 when uploading textures.
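As a rough illustration (the depth names are meant to convey the idea,
not the exact private enum):

    /* Pick the cheapest upload format that still preserves the
     * channel depth, instead of jumping straight to F32. */
    switch (gdk_memory_format_get_depth (format))
      {
      case GDK_MEMORY_U8:
        gl_internal_format = GL_RGBA8;
        break;
      case GDK_MEMORY_FLOAT16:
        gl_internal_format = GL_RGBA16F;
        break;
      case GDK_MEMORY_FLOAT32:
      default:
        gl_internal_format = GL_RGBA32F;
        break;
      }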
GDK_SEAT_CAPABILITY_TABLET_PAD stood awkwardly outside the
ALL value. Even though a pad is not a keyboard, its focus behaves
more like keyboard focus, so it should be part of this group
together with keyboards.
We were creating the pad device on wp_tablet_pad.done, but
at that time we do not know what tablet it is associated with,
thus we cannot get appropriate vid/pid/name properties for it.
To get that, we need to wait for the pad to enter a surface,
at that time we do know what tablet it is associated with, so
we can get better information about the device.
There are pads that may plausibly "change" tablet between
one .enter event and the next (e.g. Wacom Express Key Remote),
but this situation is highly unlikely. The pad devices created
are thus persistent until that situation happens.
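A sketch of the resulting .enter handler; GdkWaylandTabletPadData and
create_pad_device() are stand-ins for the actual internals:

    static void
    tablet_pad_enter (void                     *data,
                      struct zwp_tablet_pad_v2 *wp_tablet_pad,
                      uint32_t                  serial,
                      struct zwp_tablet_v2     *wp_tablet,
                      struct wl_surface        *surface)
    {
      GdkWaylandTabletPadData *pad = data;

      /* Only now do we know the paired tablet, so only now can we
       * fill in sensible vid/pid/name properties. */
      if (pad->device == NULL)
        pad->device = create_pad_device (pad, wp_tablet);
    }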
Sometimes, GLX can decide to use the previous request serial when faking
XErrors via __glXSendError() (look through the Mesa sources to enjoy).
This can cause the error trap we just installed to not feel responsible
for the error. And that makes GDK decide to immediately abort the
application.
That is not what we or GLX want.
So we use a no-op X Request to bump the request number so that when GLX
does its shenanigans, it uses a serial that our error trap will catch.
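Roughly (the surrounding variables are illustrative):

    gdk_x11_display_error_trap_push (display);

    /* XNoOp() only bumps the request serial, so the "previous request"
     * that GLX may blame is one our trap covers. */
    XNoOp (gdk_x11_display_get_xdisplay (display));

    context = glXCreateContextAttribsARB (dpy, fbconfig, share,
                                          True, attribs);

    gdk_x11_display_error_trap_pop_ignored (display);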
Fixes a crash in mutter's CI which apparently manages to drive GLX
without an X server.
In error cases, glXCreateContextAttribsARB() will always return NULL so
it is enough to run the loop until the first non-NULL context is
returned.
And at that point, we can just look at the return value and ignore all
errors.
Instead of having a descriptor set per operation, we just have one
descriptor set and bind all our images into it.
Then the shaders get to use an index into the large texture array
instead.
Getting this to work - because it's a Vulkan extension that needs to be
manually enabled, even though it's officially part of Vulkan 1.2 - is
insane.
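For reference, enabling it at device creation looks roughly like this
(the feature bits shown are the ones our usage implies, not a verbatim
copy of the code):

    VkPhysicalDeviceDescriptorIndexingFeatures indexing_features = {
      .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DESCRIPTOR_INDEXING_FEATURES,
      .runtimeDescriptorArray = VK_TRUE,
      .descriptorBindingPartiallyBound = VK_TRUE,
      .descriptorBindingSampledImageUpdateAfterBind = VK_TRUE,
    };
    VkDeviceCreateInfo device_info = {
      .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
      .pNext = &indexing_features,
      /* queues, extensions, ... */
    };

The shaders can then index the big texture array with a per-operation
index, e.g. textures[nonuniformEXT(tex_id)] in GLSL.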
Instead of trapping errors for the whole loop trying to create GL
contexts, trap them once per GL context.
Apparently GLX throws an error when a too-high version is requested,
instead of just returning NULL, and that error lingers when we then
try lower versions.
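So the loop becomes, roughly (versions[] and attribs_for() are
illustrative):

    for (i = 0; context == NULL && i < G_N_ELEMENTS (versions); i++)
      {
        gdk_x11_display_error_trap_push (display);
        context = glXCreateContextAttribsARB (dpy, fbconfig, share, True,
                                              attribs_for (versions[i]));
        gdk_x11_display_error_trap_pop_ignored (display);
      }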
Fixes #5857
We may try to update the XRR outputs and CRTCs while they're changing
in the server, and so we may get a BadRROutput error that we're
currently not handling properly.
So use traps and check whether we got errors, and if we did, ignore
the current output.
It's not necessary to call init_randr13() again, because if we got
errors, there's very likely a change coming that will be notified in a
later iteration, during which we'll repeat the init actions.
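Per output, the shape is roughly:

    gdk_x11_display_error_trap_push (display);
    output_info = XRRGetOutputInfo (xdisplay, resources, outputs[i]);
    if (gdk_x11_display_error_trap_pop (display) != 0 || output_info == NULL)
      continue;  /* the output vanished mid-update; skip it */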
For non-gles, make it handle unpremultiplied formats,
and everything else, by downloading the texture in its
preferred format and, in most cases, doing a
gdk_memory_convert afterwards.
For gles, keep using glReadPixels, but handle cases
where the gl read format doesn't match the texture
format by doing the necessary swizzling before calling
gdk_memory_convert.
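In rough pseudocode (gdk_memory_convert() is private API, so take the
signature as a sketch):

    /* non-GLES path */
    glGetTexImage (GL_TEXTURE_2D, 0, preferred_gl_format,
                   preferred_gl_type, tmp_data);
    if (tmp_format != requested_format)
      gdk_memory_convert (dest_data, dest_stride, requested_format,
                          tmp_data, tmp_stride, tmp_format,
                          width, height);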
Make the callers of this function check for
straight alpha themselves, and only do the
version compatibility check here. This makes
the function usable in contexts where straight
alpha is acceptable.
Use &__ImageBase for the GTK DLL and GetModuleHandle (NULL)
for the application module. Then remove DllMain as it's not
necessary anymore.
References:
[1] Accessing the current module's HINSTANCE from a static library:
https://devblogs.microsoft.com/oldnewthing/20041025-00/?p=37483
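In practice that boils down to (per the article above):

    #include <windows.h>

    extern IMAGE_DOS_HEADER __ImageBase;

    static HINSTANCE
    gtk_dll_module (void)
    {
      /* &__ImageBase is the base address of whatever module this
       * translation unit was linked into -- here, the GTK DLL. */
      return (HINSTANCE) &__ImageBase;
    }

    static HINSTANCE
    application_module (void)
    {
      /* NULL means "the module of the process executable" */
      return GetModuleHandle (NULL);
    }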
The display xevent signal connection takes the ownership of the stream
until we get a valid event, so it should manage the stream lifetime.
Make this clearer by automatically dropping the stream reference
when we disconnect from the xevent signal handler.
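One way to express that ownership (a sketch; xevent_cb is a
placeholder):

    /* the connection owns the reference, so disconnecting -- or the
     * display going away -- drops it automatically */
    g_signal_connect_data (display, "xevent",
                           G_CALLBACK (xevent_cb),
                           g_object_ref (stream),
                           (GClosureNotify) g_object_unref,
                           0);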
We create a new stream during gdk_x11_selection_input_stream_new_async(),
and that stream is already referenced when passed to the task via
g_task_return_pointer(), so there's no need to reference it again before
returning it, or we'd end up leaking.
Fixes: https://gitlab.gnome.org/GNOME/gtk/-/issues/4892
Without this, there are still GdkMonitors present for displays that are
present but disconnected (such as when a laptop disables the internal
display to connect to an external monitor).
XWayland (at least on gnome-shell) does not support SGI_swap_control,
which we were using to unset the swap interval.
It does support EXT_swap_control though, which is the more modern
version of the same thing, so this commit adds support for that.
And now GDK_DEBUG=no-vsync gives me >1000fps instead of just 60fps.
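The GLX side of this is essentially (sketch):

    if (epoxy_has_glx_extension (xdisplay, screen, "GLX_EXT_swap_control"))
      glXSwapIntervalEXT (xdisplay, glx_drawable, 0);
    else if (epoxy_has_glx_extension (xdisplay, screen, "GLX_SGI_swap_control"))
      glXSwapIntervalSGI (0);  /* the old path; XWayland doesn't offer it */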
With XWayland and direct scanout it is possible that some apps get into
a situation where more than 2 buffers are in flight and in that case we
want to be able to still track the change regions for those buffers.
Usually 3 buffers are in use, so we go one higher, just to be safe.
Public headers should mainly include gdktypes.h, which already include
the symbol visibility and versioning macros; we can also modify
gdktypes.h to include the enumerations.
Fix the circular dependency by moving the generated
headers to gdk/version/, and build that directory
first.
Misc other fixes, such as putting the custom targets
as sources, not dependencies, and using the correct
major version in the generator script.
Let's poach the same script used by GLib to avoid having to add all the
version macros by hand every time we increment the GTK version.
This is a work in progress:
- need to rename the GLIB_STATIC_COMPILATION check
- circular dependency: libgtkcss depends on gdkversionmacros.h, but libgdk
depends on libgtkcss
When GDK_DEBUG=no-vsync is on, we might have more than one outstanding
frame. Don't assert when that happens. Just request a frame callback for
the first and skip the others.
Not all frames get timing info with GDK_DEBUG=no-vsync, so make sure
that even when we render tons of frames, the one frame that does get
timing info is still there when the timing info arrives.
I set it to 128, up from 16.
This is roughly good enough to get to 5000fps on a 60Hz monitor.
...when we are using wglChoosePixelFormatARB(). This ensures that we
have a HDC with a pixel format that will really support alpha bits, as
we did for the traditional ChoosePixelFormat().
Thanks to Patrick Zacharias for testing and pointing things out.
We were sending random junk to ChoosePixelFormat().
Also assert that we don't overflow the array. That might be useful to
know if we carelessly add attributes later.
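A well-formed attribute list is pairs of (attribute, value),
zero-terminated; sketched here with the alpha bits we care about:

    int attribs[] = {
      WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
      WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
      WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
      WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
      WGL_COLOR_BITS_ARB,     32,
      WGL_ALPHA_BITS_ARB,     8,
      0
    };
    int format;
    UINT n_formats;

    if (!wglChoosePixelFormatARB (hdc, attribs, NULL, 1,
                                  &format, &n_formats) ||
        n_formats == 0)
      {
        /* fall back to ChoosePixelFormat() */
      }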
... for creating the actual WGL contexts, so that we can cut down on the
number of times we need to create the base, legacy WGL contexts in
order to create the WGL contexts with attributes. We can just make the
dummy context that we already have current in order to create the
needed WGL contexts.
If we query the best supported pixel format for our HDC via
wglChoosePixelFormatARB() (i.e. we have the WGL_ARB_pixel_format
extension), it may return a pixel format different from the one we used
for the dummy context that we set up in order to run
wglChoosePixelFormatARB() in the first place. Sadly, that call requires
a WGL context (HGLRC) to be current, which means the dummy HDC already
has a pixel format set (note that each HDC is only allowed to have its
pixel format set *once*). This is notably the case on Intel display
drivers.
Since we are emulating surfaceless GL contexts using the dummy GL
context (and thus the dummy HDC derived from the notification HWND used
in GdkWin32Display), we would get into trouble if the actual HDC from
the GdkWin32Surface has a different pixel format set.
So, to fix this situation, we do the following:
* Create yet another dummy HWND in order to grab the HDC to query for the
capabilities the GL drivers support, and to call wglChoosePixelFormatARB() as
appropriate (or ChoosePixelFormat()) for the final pixel format that we use.
* Ditch the dummy GL context, HDC and HWND after obtaining the pixel format.
* Then set the final pixel format that we obtained onto the HDC that is derived
from the HWND used in GdkWin32Display for notifications, which will become our
new dummy HDC.
* Create a new dummy HGLRC for use with the new dummy HDC to emulate surfaceless
GL support.
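The key constraint is that a pixel format can be set on an HDC only
once, so the final step is roughly (best_format standing in for
whatever wglChoosePixelFormatARB()/ChoosePixelFormat() picked):

    PIXELFORMATDESCRIPTOR pfd;

    DescribePixelFormat (notification_hdc, best_format, sizeof (pfd), &pfd);
    SetPixelFormat (notification_hdc, best_format, &pfd);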
We are currently using g_clear_pointer() on the intermediate WGL
contexts (HGLRCs) that we need to create along the way, which means we
need to ensure that the correct calling convention for
wglDeleteContext() is applied.
To be absolutely safe about it, use the gdk_win32_private_wglDeleteContext()
calls, which will in turn call wglDeleteContext() directly from opengl32.dll
(using the OpenGL headers from the Windows SDK) instead of going via libepoxy,
which will assure us that the correct calling convention is applied.
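The shape of that shim, sketched (the real one lives in GDK's Win32
private code):

    static void
    private_wglDeleteContext (gpointer hglrc)
    {
      /* goes straight to opengl32.dll with the WINAPI convention,
       * so it is safe to call through a plain C function pointer */
      wglDeleteContext ((HGLRC) hglrc);
    }

    /* ... later: */
    g_clear_pointer (&intermediate_hglrc, private_wglDeleteContext);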
Fixes issue #5808.
... and use it in rendernodes.
Setting up textures for diffing is done via gdk_texture_set_diff() which
should only be used during texture construction.
Note that the pointers to next/previous are allowed to dangle if one of
the textures is finalized, but that's fine because we always check both
textures' links to each other before we consider the pointer valid.
The Expose events following a ConfigureNotify may arrive at
a time when we have not yet resized the surface, making these
expose events a no-op. Even though gsk/gtk take care of the
window content itself, this might lead to unrendered portions
of the window shadow.
This may be seen with GSK_RENDERER=cairo and GDK_BACKEND=x11,
attempting to tile a window (e.g. gtk4-demo) left or right.
The window will show black rectangles or other artifacts in
the window shadow areas that correspond to the newly painted
portions (as the window needs to expand vertically).
In order to fix this with behavior similar to Wayland's, consider
the whole surface invalidated after a resize, to ensure everything is
painted from scratch.
... when it is available.
Also introduce the new function gdk_rectangle_transform_affine(), which
looks like overkill for this purpose, but I'm about to use it elsewhere.
There's no need for EGL to do any timing, we do it in GTK already.
This fixes hangs in Mesa when we hide a surface after a SwapBuffers()
but before the frame callback arrives.
If we then reshow the surface and immediately render to it, Mesa would
still have a frame callback from before the hiding and forever poll()
waiting for the compositor to send the callback.
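The actual change amounts to not letting EGL throttle at all (sketch):

    /* GTK paces frames with its own frame clock, so disable EGL's */
    eglSwapInterval (egl_display, 0);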
Fixes #5761
Add a new function to TextureBuilder that takes a GLsync, which
internal code must wait on before using the texture.
Somewhat sneakily, we don't take the sync if syncs are not supported by
the current GL context.
As public API has no code to query the sync for the destroy notify, this
is fine and it means we don't have to do the check every time we want to
call gdk_texture_get_sync() internally.
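Usage, sketched with the builder API (assuming the GdkGLTextureBuilder
spelling):

    GLsync sync = glFenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    GdkGLTextureBuilder *builder = gdk_gl_texture_builder_new ();

    gdk_gl_texture_builder_set_context (builder, context);
    gdk_gl_texture_builder_set_id (builder, texture_id);
    gdk_gl_texture_builder_set_width (builder, width);
    gdk_gl_texture_builder_set_height (builder, height);
    gdk_gl_texture_builder_set_sync (builder, sync); /* ignored if unsupported */

    texture = gdk_gl_texture_builder_build (builder, destroy_cb, user_data);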
Building GL textures is complicated, so create an object to make them.
So far, this object just contains the functionality of
gdk_gl_texture_new(), but that will change in the future.
In particular, we want to get the GL version when the Windows box/VM
has an unsuitable GL implementation.
This is somewhat helpful in analyzing failures to bring up GL on
machines where users claim GL does work.
This way, we can realize it and either print success information about
it or return NULL if that fails.
This makes it more likely that we fail early, which means we can then
initialize EGL.
This refactor achieves the following:
* check the GL version against the proper matching context version;
  in particular, for legacy contexts, we now actually check it
* make sure the actual version is set, even for legacy contexts
* make sure set_is_legacy() is set properly
Now that all contexts do that, insist that they keep doing it.
And because they keep doing it, we can support querying the GL version
from gdk_gl_context_get_version() without requiring the context to be
made current.
The EGL spec states:
The context returned must be the specified version, or a later
version which is backwards compatible with that version.
Even if a later version is returned, the specified version
must correspond to a defined version of the client API.
GTK has so far been relying on EGL implementations returning a
later version, because that is what Mesa does.
But ANGLE does not do that and only provides the minimum version, which
means Windows EGL has been forced to use a lower EGL version for no
reason.
So fix this and try versions in order from highest to lowest.
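In code, the retry loop looks roughly like this (the version list is
illustrative; EGL_CONTEXT_MAJOR/MINOR_VERSION need EGL 1.5 or
EGL_KHR_create_context):

    static const int versions[][2] = { { 3, 2 }, { 3, 1 }, { 3, 0 }, { 2, 0 } };
    EGLContext context = EGL_NO_CONTEXT;
    gsize i;

    for (i = 0; i < G_N_ELEMENTS (versions) && context == EGL_NO_CONTEXT; i++)
      {
        EGLint attribs[] = {
          EGL_CONTEXT_MAJOR_VERSION, versions[i][0],
          EGL_CONTEXT_MINOR_VERSION, versions[i][1],
          EGL_NONE
        };

        context = eglCreateContext (egl_display, egl_config,
                                    share_context, attribs);
      }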