If we don't clip anything, we still have bounds - either the framebuffer
size or (more likely) the scissor rect. And we don't want to draw
anything that is outside these bounds.
So clip in those cases, too.
Stops gtk4-demo --run=listbox from trying to render the whole listbox
instead of only the visible parts.
Use G_TYPE_CHECK_INSTANCE_TYPE() instead of just checking for != NULL.
After all, this is a GTypeInstance.
Also fixes some gcc complaints when checking
node == NULL || GSK_IS_RENDERNODE (node)
which gcc was convinced would always be true.
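The check boils down to something like this (the type macro name is an
assumption):

  #define GSK_IS_RENDERNODE(node) \
    (G_TYPE_CHECK_INSTANCE_TYPE ((node), GSK_TYPE_RENDER_NODE))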
The match operator was added in Python 3.10, which is a bit too new for
some downstreams.
While at it, let's fix the flake8 errors and warnings.
Fixes: #5934
In particular, fix the combination of luminance and alpha. We want to do
mask = luminance * alpha
and for inverted
mask = (1.0 - luminance) * alpha
so add a test that makes sure we do that and then fix the code and
existing tests to conform to it.
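In code, the intended per-pixel math is simply (a sketch over
normalized [0, 1] values):

  float mask     = luminance * alpha;          /* luminance */
  float inverted = (1.0f - luminance) * alpha; /* inverted luminance */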
When modifying a clear surface with a color matrix, the surface would
remain clear according to Cairo.
That's very unfortunate when we prepare a mask for inverted-alpha
masking.
If we build our own targets, we need to include those.
This is only relevant when adding new shaders because meson will
complain that the (unused) sources don't exist as it tries to include
those.
That in turn prevents the build.ninja file from being generated, which
would have built those shaders and allowed copying them into the
sources.
Note that builds using glslc then no longer verify that all the shader
files are included with the sources, but we have CI to check that.
In particular, catch radius values being < 0 by return_if_fail()ing in
the rendernode creation code, and by erroring out in the rendernode
parser.
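The creation-time check is along these lines (a sketch; the exact
fields depend on the node):

  /* sketch: reject negative corner radii up front */
  for (i = 0; i < 4; i++)
    g_return_val_if_fail (outline->corner[i].width >= 0 &&
                          outline->corner[i].height >= 0, NULL);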
I try too much dumb stuff in the node editor.
The IFUNC resolvers that we are using here get
run early, before asan had a chance to set up its
plumbing, and therefore things go badly if they
are compiled with asan. Turning it off makes things
work again.
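The workaround looks roughly like this (function names hypothetical;
the attribute is gcc's):

  /* sketch: keep asan instrumentation out of the ifunc resolver,
   * since it runs before asan has set up its plumbing */
  typedef void (* conversion_func) (void);

  extern conversion_func select_func_for_cpu (void);

  __attribute__((no_sanitize_address))
  static conversion_func
  resolve_conversion (void)
  {
    return select_func_for_cpu ();
  }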
The gcc bug tracking this problem:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110442
Thanks to Jakub Jelinek and Florian Weimer for
analyzing this and recommending the workaround.
g_hash_table_insert() frees the given key if it already exists
in the hashtable. But since we use the same pointer in the
following line, this results in a use-after-free.
So instead, insert the key only if it doesn't exist.
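Sketched (table and key are whatever the caller owns):

  /* sketch: only insert when the key is absent, so the caller's
   * key pointer is never freed behind its back */
  if (!g_hash_table_contains (table, key))
    g_hash_table_insert (table, key, value);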
Make the display handle the cache, because we only need one.
We store the cache in
$CACHE_DIR/gtk-4.0/vulkan-pipeline-cache/$UUID.$VERSION
so we regenerate caches for each different device (different UUID) and
each different driver version.
We also keep track of the etag of the cache file, so if 2 different
applications update the cache, we can detect that.
Vulkan allows merging caches, so the 2nd app reloads the new cache file
and merges it into its cache before saving.
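The reload-and-merge step is sketched below (the file-loading helper
is hypothetical; the merge call is plain Vulkan):

  /* sketch: when the etag changed, pull in the other app's cache
   * and merge it into ours before writing the file back */
  VkPipelineCache disk_cache = create_cache_from_file (device, path);

  vkMergePipelineCaches (device, app_cache, 1, &disk_cache);
  vkDestroyPipelineCache (device, disk_cache, NULL);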
It turns out a variable descriptor count is only supported for the last
binding in a set, not for every binding.
So we need to create one set for each of our arrays.
[ VUID-VkDescriptorSetLayoutBindingFlagsCreateInfo-pBindingFlags-03004 ] Object 0: handle = 0x33a9f10, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xd3f353a | vkCreateDescriptorSetLayout(): pBindings[0] has VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT but 0 is the largest value of all the bindings. The Vulkan spec states: If an element of pBindingFlags includes VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT, then all other elements of VkDescriptorSetLayoutCreateInfo::pBindings must have a smaller value of binding (https://www.khronos.org/registry/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorSetLayoutBindingFlagsCreateInfo-pBindingFlags-03004)
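The resulting layout, per array, is sketched here (descriptor type,
count, and stage flags are assumptions):

  /* sketch: one layout per array, so the variable-count binding
   * is the only (and therefore highest-numbered) binding */
  VkDescriptorSetLayout layout;
  VkDescriptorBindingFlags flags = VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT;
  VkDescriptorSetLayoutBindingFlagsCreateInfo flags_info = {
    .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_BINDING_FLAGS_CREATE_INFO,
    .bindingCount = 1,
    .pBindingFlags = &flags,
  };
  VkDescriptorSetLayoutBinding binding = {
    .binding = 0,
    .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
    .descriptorCount = MAX_DESCRIPTORS, /* upper bound, real count at alloc */
    .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
  };
  VkDescriptorSetLayoutCreateInfo layout_info = {
    .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
    .pNext = &flags_info,
    .bindingCount = 1,
    .pBindings = &binding,
  };

  vkCreateDescriptorSetLayout (device, &layout_info, NULL, &layout);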
Somebody (me) had flipped the 2 flags in commit ba28971a18:
[ VUID-vkCmdCopyBufferToImage-srcBuffer-00174 ] Object 0: handle = 0x3cfaac0, type = VK_OBJECT_TYPE_COMMAND_BUFFER; Object 1: handle = 0x430000000043, type = VK_OBJECT_TYPE_BUFFER; | MessageID = 0xe1b276a1 | Invalid usage flag for VkBuffer 0x430000000043[] used by vkCmdCopyBufferToImage. In this case, VkBuffer should have VK_BUFFER_USAGE_TRANSFER_SRC_BIT set during creation. The Vulkan spec states: srcBuffer must have been created with VK_BUFFER_USAGE_TRANSFER_SRC_BIT usage flag (https://www.khronos.org/registry/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-vkCmdCopyBufferToImage-srcBuffer-00174)
If a node has a higher depth, pick the RGBA format that has that depth
as the texture format we're rendering to with render_texture().
Support for adapting the swapchain is not part of this.
When a GdkMemoryFormat isn't supported, pick close formats that have a
higher chance of being supported.
Make sure this works recursively and the whole loop always ends up at
R8G8B8A8_UNORM because that one is mandatory.
Roughly, follow these rules (sketched as a loop after the list):
1. Drop the unpremultiplied variant
2. Expand channels to include all of RGBA
3. Pick a swizzle that is RGBA
4. Pick the next largest depth
5. Fall back to R8G8B8A8_UNORM
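A sketch of the loop (both helper names are hypothetical):

  /* sketch: walk the fallback chain until the device can handle it;
   * the chain always terminates at R8G8B8A8_UNORM */
  while (!device_supports_format (device, format))
    format = get_fallback_format (format);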
This way, we unify the code paths for memory access to textures.
We also technically gain the ability to modify images, though I have no
use case for this.
That way, the offscreen can create images of different types.
It's not used in this commit, but will come in handy when we want to
support high bit depth.
Pretty much copy what GL does and just use the default display to create
GPU-related resources, so callers don't need to supply a display.
This also adds gdk_display_create_vulkan_context() but I've
kept it private because the Vulkan API is generally considered in flux,
in particular with our pending attempts to redo how renderers work.
Fixes a bug introduced in d1135f9e3c.
Unluckily, the buffer was large enough that all my testing didn't catch
it, because it took a few minutes to overflow.
Replace gdk_memory_format_prefers_high_depth with the more generic
gdk_memory_format_get_depth() that returns the depth of the individual
channels.
Also make the GL renderer use that to pick the generic F16 format
instead of immediately going for F32 when uploading textures.
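On the GL side the pick then looks roughly like this (the enum values
and format choices are assumptions):

  /* sketch: choose the upload format by channel depth instead of
   * jumping straight to F32 */
  switch (gdk_memory_format_get_depth (format))
    {
    case GDK_MEMORY_FLOAT32:
      internal_format = GL_RGBA32F;
      break;
    case GDK_MEMORY_FLOAT16:
      internal_format = GL_RGBA16F;
      break;
    default:
      internal_format = GL_RGBA8;
      break;
    }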
The old environment variables to force staging buffer/image uploads are
no longer used, so we can remove them.
However, we do autodetect the fast path for avoiding a staging buffer
now, and we might want to be able to turn that off for testing.
So add GSK_DEBUG=staging that does exactly that.
This is unused now that all the code uses map/unmap.
The only thing the old code did that map/unmap doesn't is use a staging
image as an alternative to a staging buffer for image uploads.
However, that code is not necessary for anything, so I'm sure we can do
without.
If the memory heap that the GPU uses allows CPU access
(which is the case on basically every integrated GPU, including phones),
we can avoid a staging buffer and write directly into the image memory.
Check for this case and do that automatically.
Unfortunately we need to change the image format we use from
VK_IMAGE_TILING_OPTIMAL to VK_IMAGE_TILING_LINEAR, I haven't found a way
around that yet.
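Detection amounts to scanning the device's memory types:

  /* sketch: look for a memory type that is both device-local and
   * host-visible, so image memory can be vkMapMemory()ed directly */
  VkPhysicalDeviceMemoryProperties props;
  uint32_t i;

  vkGetPhysicalDeviceMemoryProperties (physical_device, &props);
  for (i = 0; i < props.memoryTypeCount; i++)
    {
      VkMemoryPropertyFlags f = props.memoryTypes[i].propertyFlags;

      if ((f & VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) &&
          (f & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT))
        break; /* write directly, no staging buffer needed */
    }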
Use the new map/unmap image upload method for Cairo node drawing:
1. map() the memory
2. create an image surface on that memory
3. draw to that image surface
4. success
There's no longer a need for Cairo to allocate image memory.
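Sketched with the API from the commit below (the map_memory signature,
with a stride out-parameter, is an assumption):

  /* sketch: draw the cairo node straight into mapped image memory */
  data = gsk_vulkan_image_map_memory (image, &stride);
  surface = cairo_image_surface_create_for_data (data,
                                                 CAIRO_FORMAT_ARGB32,
                                                 width, height, stride);
  /* ... draw the node to surface ... */
  cairo_surface_finish (surface);
  cairo_surface_destroy (surface);
  gsk_vulkan_image_unmap_memory (image);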
As an alternative to gsk_vulkan_image_new_from_data(), which takes
data and creates an image from it, add a 3-step process:
gsk_vulkan_image_new_for_upload()
gsk_vulkan_image_map_memory()
/* put data into memory */
gsk_vulkan_image_unmap_memory()
The benefit of this approach is that it potentially avoids a copy;
instead of creating a buffer, writing the data into it, and then
memcpy()ing it into the image, the data can be written straight
into image memory.
So far, only the staging buffer upload is implemented.
There are also no users yet; those come in the next commit(s).