Instead of creating a pipeline GObject, just ask for the VkPipeline.
And instead of having the Op handle it, just let the renderpass look
up/create the relevant pipeline while creating commands so that it can
insert vkCmdBindPipeline calls as needed.
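A minimal sketch of the bind-as-needed logic, with hypothetical names
(the real renderpass code is more involved):

    #include <vulkan/vulkan.h>

    /* Bind a pipeline only when it differs from the one currently
     * bound, so consecutive ops sharing a pipeline don't emit
     * redundant vkCmdBindPipeline calls. */
    static void
    maybe_bind_pipeline (VkCommandBuffer  commands,
                         VkPipeline       pipeline,
                         VkPipeline      *current)
    {
      if (pipeline == *current)
        return;

      vkCmdBindPipeline (commands, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
      *current = pipeline;
    }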
This reverts commit 0f184d3270.
The renderer is good enough to make use of the clip region.
Or rather: If it isn't, the renderpass should take care of that, not the
render object.
This reverts most of commit f420c143e0
again because it turns out GPUs like combined images and samplers.
But: The one thing we don't revert is allowing the C code to select any
combination of sampler and image:
gsk_vulkan_render_get_image_descriptor() now takes a 2nd argument
specifying the sampler.
This allows the same flexibility as before; we just combine things
early.
This change was inspired by
https://developer.nvidia.com/blog/vulkan-dos-donts/
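Combining early boils down to writing combined image/sampler
descriptors; in plain Vulkan terms (a sketch, not the actual GSK code):

    #include <vulkan/vulkan.h>

    /* Write image and sampler into a single
     * VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER descriptor. */
    static void
    write_combined_descriptor (VkDevice        device,
                               VkDescriptorSet set,
                               uint32_t        index,
                               VkImageView     view,
                               VkSampler       sampler)
    {
      VkDescriptorImageInfo image_info = {
        .sampler = sampler,
        .imageView = view,
        .imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
      };
      VkWriteDescriptorSet write = {
        .sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET,
        .dstSet = set,
        .dstBinding = 0,
        .dstArrayElement = index,
        .descriptorCount = 1,
        .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
        .pImageInfo = &image_info,
      };

      vkUpdateDescriptorSets (device, 1, &write, 0, NULL);
    }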
Have a resource path => VkShaderModule hash table instead of doing fancy
custom objects.
A benefit is that shader modules are now shared between all renderers
and pipelines.
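A sketch of the lookup, with error handling elided and assuming 64-bit
Vulkan handles so they fit in a gpointer:

    #include <gio/gio.h>
    #include <vulkan/vulkan.h>

    static VkShaderModule
    get_shader_module (GHashTable *modules,  /* resource path => VkShaderModule */
                       VkDevice    device,
                       const char *resource_path)
    {
      VkShaderModule module;
      GBytes *bytes;

      module = g_hash_table_lookup (modules, resource_path);
      if (module != VK_NULL_HANDLE)
        return module;

      /* not cached yet: create it from the SPIR-V in the resources */
      bytes = g_resources_lookup_data (resource_path,
                                       G_RESOURCE_LOOKUP_FLAGS_NONE, NULL);
      VkShaderModuleCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = g_bytes_get_size (bytes),
        .pCode = g_bytes_get_data (bytes, NULL),
      };
      vkCreateShaderModule (device, &info, NULL, &module);
      g_hash_table_insert (modules, g_strdup (resource_path), module);
      g_bytes_unref (bytes);

      return module;
    }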
Instead of creating the op manually, just pass in the renderpass and
have the op created from there.
This way ops aren't really initialized anymore, they are more appended
to the queue, so instead of foo_op_init() we can just call the function
foo_op().
The new code always uses an offscreen, even for children that are
exactly fitting texture nodes.
I would have had to write more code and didn't consider it worth it,
especially because it would have required complicating the
get_as_image() function.
This was the last node using the texture pipeline.
Instead of having one function that gets the image for the texture and
uploads it if it doesn't exist yet, make it 2 functions:
One to get the texture if it exists.
One to assign an uploaded image to the texture.
This way, we can potentially do the upload ourselves.
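A hypothetical sketch of a caller using the split (all names invented
for illustration):

    /* Look up first; if nothing is cached yet, we are free to choose
     * our own upload strategy before assigning the result. */
    image = get_cached_image (texture);
    if (image == NULL)
      {
        image = upload_texture_ourselves (texture);
        set_cached_image (texture, image);
      }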
Allocate the memory up front instead of passing the Op into it.
This way, we can split ops into their own source files and use an
init/finish style for them.
GskVulkanOp is meant to be a proper abstraction of operations
the Vulkan renderer will be doing.
For now it's an atrocious clunky piece of junk wedged into the
renderpass codebase.
It's so temporary that I didn't even adjust indentation of the code.
Intersection with a rounded clip takes too long.
Instead, rename the function to may_intersect() to be clear about what
it does and then just intersect with the regular rectangle.
If we don't clip anything, we still have bounds - either the framebuffer
size or (more likely) the scissor rect. And we don't want to draw
anything that is outside these bounds.
So clip in those cases, too.
Stops gtk4-demo --run=listbox from trying to render the whole listbox
instead of only the visible parts.
The match statement was added in Python 3.10, which is a bit too new for
some downstreams.
While at it, let's fix the flake8 errors and warnings.
Fixes: #5934
If we build our own targets, we need to include those.
This is only relevant when adding new shaders because meson will
complain that the (unused) sources don't exist as it tries to include
those.
And that keeps the build.ninja file from being generated, even though
it is what would have built those shaders and allowed copying them into
the sources.
Note that this makes builds with glslc not care about all the shader
files being included with the sources, but we have CI to check that.
Make the display handle the cache, because we only need one.
We store the cache in
$CACHE_DIR/gtk-4.0/vulkan-pipeline-cache/$UUID.$VERSION
so we regenerate caches for each different device (different UUID) and
each different driver version.
We also keep track of the etag of the cache file, so if 2 different
applications update the cache, we can detect that.
Vulkan allows merging caches, so the 2nd app reloads the new cache file
and merges it into its cache before saving.
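The merge step, sketched with plain Vulkan calls (error handling
elided):

    #include <glib.h>
    #include <vulkan/vulkan.h>

    /* Reload the cache file written by another application and merge it
     * into our live pipeline cache before saving. */
    static void
    merge_cache_file (VkDevice        device,
                      VkPipelineCache cache,
                      gconstpointer   data,
                      gsize           size)
    {
      VkPipelineCacheCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO,
        .initialDataSize = size,
        .pInitialData = data,
      };
      VkPipelineCache loaded;

      if (vkCreatePipelineCache (device, &info, NULL, &loaded) != VK_SUCCESS)
        return;

      vkMergePipelineCaches (device, cache, 1, &loaded);
      vkDestroyPipelineCache (device, loaded, NULL);
    }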
It turns out variable length is only supported for the last binding in
a set, not for every binding.
So we need to create one set for each of our arrays.
[ VUID-VkDescriptorSetLayoutBindingFlagsCreateInfo-pBindingFlags-03004 ] Object 0: handle = 0x33a9f10, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xd3f353a | vkCreateDescriptorSetLayout(): pBindings[0] has VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT but 0 is the largest value of all the bindings. The Vulkan spec states: If an element of pBindingFlags includes VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT, then all other elements of VkDescriptorSetLayoutCreateInfo::pBindings must have a smaller value of binding (https://www.khronos.org/registry/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorSetLayoutBindingFlagsCreateInfo-pBindingFlags-03004)
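For reference, a layout with one variable-count binding per set then
looks roughly like this (a sketch, not the actual GSK code):

    #include <vulkan/vulkan.h>

    static VkDescriptorSetLayout
    create_variable_array_layout (VkDevice         device,
                                  VkDescriptorType type,
                                  uint32_t         max_count)
    {
      VkDescriptorBindingFlags flags = VK_DESCRIPTOR_BINDING_VARIABLE_DESCRIPTOR_COUNT_BIT;
      VkDescriptorSetLayoutBindingFlagsCreateInfo flags_info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_BINDING_FLAGS_CREATE_INFO,
        .bindingCount = 1,
        .pBindingFlags = &flags,
      };
      VkDescriptorSetLayoutBinding binding = {
        .binding = 0,   /* the only binding, so trivially the last one */
        .descriptorType = type,
        .descriptorCount = max_count,
        .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
      };
      VkDescriptorSetLayoutCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
        .pNext = &flags_info,
        .bindingCount = 1,
        .pBindings = &binding,
      };
      VkDescriptorSetLayout layout;

      vkCreateDescriptorSetLayout (device, &info, NULL, &layout);
      return layout;
    }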
Somebody (me) had flipped the 2 flags in commit ba28971a18:
[ VUID-vkCmdCopyBufferToImage-srcBuffer-00174 ] Object 0: handle = 0x3cfaac0, type = VK_OBJECT_TYPE_COMMAND_BUFFER; Object 1: handle = 0x430000000043, type = VK_OBJECT_TYPE_BUFFER; | MessageID = 0xe1b276a1 | Invalid usage flag for VkBuffer 0x430000000043[] used by vkCmdCopyBufferToImage. In this case, VkBuffer should have VK_BUFFER_USAGE_TRANSFER_SRC_BIT set during creation. The Vulkan spec states: srcBuffer must have been created with VK_BUFFER_USAGE_TRANSFER_SRC_BIT usage flag (https://www.khronos.org/registry/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-vkCmdCopyBufferToImage-srcBuffer-00174)
If a node has a higher depth, pick the RGBA format that has that depth
as the texture format we're rendering to with render_texture().
Support for adapting the swapchain is not part of this.
When a GdkMemoryFormat isn't supported, pick close formats that have a
higher chance of being supported.
Make sure this works recursively and the whole loop always ends up at
R8G8B8A8_UNORM because that one is mandatory.
Roughly, follow these rules:
1. Drop the unpremultiplied variant (fall back to its premultiplied
   equivalent)
2. Expand channels to include all of RGBA
3. Pick the swizzle that is RGBA
4. Pick the next largest depth
5. Pick R8G8B8A8_UNORM
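A sketch of the resulting loop, with format_is_supported() and
fallback_format() as hypothetical helpers:

    /* fallback_format() applies rules 1-5 above, so the loop always
     * terminates: R8G8B8A8_UNORM is mandatory and supported. */
    static GdkMemoryFormat
    pick_supported_format (GdkMemoryFormat format)
    {
      while (!format_is_supported (format))
        format = fallback_format (format);

      return format;
    }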
This way, we unify the code paths for memory access to textures.
We also technically gain the ability to modify images, though I have no
use case for this.
That way, the offscreen can create images of different types.
It's not used in this commit, but will come in handy when we want to
support high bit depth.
Pretty much copy what GL does and just use the default display to create
GPU-related resources, so callers don't need to provide a display.
This also adds gdk_display_create_vulkan_context() but I've
kept it private because the Vulkan API is generally considered in flux,
in particular with our pending attempts to redo how renderers work.
Fixes a bug introduced in d1135f9e3c.
Luckily the buffer was large enough that none of my testing caught it,
because it took a few minutes to overflow.
Now that we don't use the old environment variables anymore to force
staging buffer/image uploads, we don't need them.
However, we do autodetect the fast path for avoiding a staging buffer
now, and we might want to be able to turn that off for testing.
So add GSK_DEBUG=staging that does exactly that.
This is unused now that all the code uses map/unmap.
The only thing map/unmap doesn't do that the old code did is use a
staging image as an alternative to a staging buffer for image uploads.
However, that code is not necessary for anything, so I'm sure we can do
without.
If the memory heap that the GPU uses allows CPU access
(which is the case on basically every integrated GPU, including phones),
we can avoid a staging buffer and write directly into the image memory.
Check for this case and do that automatically.
Unfortunately we need to change the image format we use from
VK_IMAGE_TILING_OPTIMAL to VK_IMAGE_TILING_LINEAR, I haven't found a way
around that yet.
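The detection is a standard memory-properties query; a sketch:

    #include <vulkan/vulkan.h>

    /* Check whether a memory type can be mapped and written by the CPU
     * without explicit cache flushes. */
    static VkBool32
    memory_type_is_host_visible (VkPhysicalDevice physical_device,
                                 uint32_t         type_index)
    {
      const VkMemoryPropertyFlags wanted = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
                                         | VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;
      VkPhysicalDeviceMemoryProperties props;

      vkGetPhysicalDeviceMemoryProperties (physical_device, &props);

      return (props.memoryTypes[type_index].propertyFlags & wanted) == wanted;
    }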
Use the new map/unmap image upload method for Cairo node drawing:
1. map() the memory
2. create an image surface for that memory
3. draw to that image surface
4. success
There's no longer a need for Cairo to allocate image memory.
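The flow, roughly; the gsk_vulkan_image_map_memory() signature here is
a guess:

    guchar *data;
    int stride;
    cairo_surface_t *surface;
    cairo_t *cr;

    /* 1. map() the memory (hypothetical signature) */
    data = gsk_vulkan_image_map_memory (image, &stride);

    /* 2. create an image surface for that memory */
    surface = cairo_image_surface_create_for_data (data, CAIRO_FORMAT_ARGB32,
                                                   width, height, stride);

    /* 3. draw to that image surface */
    cr = cairo_create (surface);
    /* ... draw the node ... */
    cairo_destroy (cr);

    /* 4. success */
    cairo_surface_flush (surface);
    cairo_surface_destroy (surface);
    gsk_vulkan_image_unmap_memory (image);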
As an alternative to gsk_vulkan_image_new_from_data(), which takes
prepared data and creates an image from it, add a 3-step process:
    gsk_vulkan_image_new_for_upload()
    gsk_vulkan_image_map_memory()
    /* put data into memory */
    gsk_vulkan_image_unmap_memory()
The benefit of this approach is that it potentially avoids a copy;
instead of creating a buffer to pass and writing the data into it before
then memcpy()ing it into the image, the data can be written straight
into image memory.
So far, only the staging buffer upload is implemented.
There are also no users, those come in the next commit(s).
When new render node types are added, nothing warns us that we need to
bump N_RENDER_NODES.
Make sure that that's no longer necessary by refactoring the code to
remove the define.
This is more expensive, but it finds more cases, and in particular it
catches corner cases like empty nodes or fully clipped nodes that might
otherwise make the kernel throw signals in our direction.
If one of the descriptor sets doesn't have any items, don't include it
in the sets passed to vkUpdateDescriptorSets().
This has no effect right now, because we either have both images and
samplers or neither, but it will become relevant once we also support
buffers.
Sometimes the GPU is still busy when the next frame starts (like when
benchmarking without vsync), so we need to leave all those resources
alone and create new ones.
That's what the render object is for, so we just create another one.
However, when we create too many, we'll starve the CPU. So we'll limit
it. Currently, that limit is at 4, but I've never reached it (I've also
not starved the GPU yet), so that number may want to be set lower/higher
in the future.
Note that this is different from the number of outstanding buffers, as
those are not busy on the GPU but on the compositor, and as such a
buffer may have not finished rendering but have been returned from the
compositor (very busy GPU) or have finished rendering but not been
returned from the compositor (very idle GPU).
The idea here is that we can do more complex combinations and use that
to support texture-scale nodes or use fancy texture formats (such as
YUV).
I'm not sure this is actually necessary, but for now it gives more
flexibility.
For blend and crossfade nodes, one of the children may exist and
influence the rendering, while the other does not.
Previously, we would skip the node, which would cause the required
rendering to not happen. We now send a valid texture id for the
invalid offscreen, thereby actually rendering the required parts.
Fixes the blend-invisible-child compare test
Current state for compare tests:
Ok: 397
Expected Fail: 0
Fail: 26
Unexpected Pass: 0
Skipped: 2
Timeout: 0
Instead of having a descriptor set per operation, we just have one
descriptor set and bind all our images into it.
Then the shaders get to use an index into the large texture array
instead.
Getting this to work - because it's a Vulkan extension that needs to be
manually enabled, even though it's officially part of Vulkan 1.2 - is
insane.
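For reference, the device-creation plumbing this requires, roughly
(queue setup etc. omitted):

    /* The features struct is core in Vulkan 1.2 but still must be
     * explicitly chained and enabled. */
    VkPhysicalDeviceDescriptorIndexingFeatures indexing_features = {
      .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_DESCRIPTOR_INDEXING_FEATURES,
      .shaderSampledImageArrayNonUniformIndexing = VK_TRUE,
      .descriptorBindingPartiallyBound = VK_TRUE,
      .descriptorBindingVariableDescriptorCount = VK_TRUE,
      .runtimeDescriptorArray = VK_TRUE,
    };
    const char *extensions[] = { VK_EXT_DESCRIPTOR_INDEXING_EXTENSION_NAME };
    VkDeviceCreateInfo device_info = {
      .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
      .pNext = &indexing_features,
      .enabledExtensionCount = G_N_ELEMENTS (extensions),
      .ppEnabledExtensionNames = extensions,
    };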
If we have a rectangular clip without transforms, we can use
scissoring. This works particularly well because it allows intersecting
rounded rectangles with regular rectangles in all cases:
Use the scissor rect for the rectangle and the normal clipping code for
the rounded rectangle.
The idea is to use it for clip nodes when they are integer-aligned.
To do that, we need to track the scissor rect in the parse state, so we
do that, too.
Also move the viewport offset out of the projection matrix, as it is
part of the transform between clip and scissor, so it needs to live in
the offset.
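Setting the scissor itself is a single command; a sketch (not the
actual renderpass code):

    #include <cairo.h>
    #include <vulkan/vulkan.h>

    /* The integer-aligned clip rectangle, in device pixels, becomes
     * the scissor; the rounded part stays in the fragment shader. */
    static void
    set_scissor (VkCommandBuffer              commands,
                 const cairo_rectangle_int_t *clip)
    {
      VkRect2D scissor = {
        .offset = { clip->x, clip->y },
        .extent = { clip->width, clip->height },
      };

      vkCmdSetScissor (commands, 0, 1, &scissor);
    }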
We align the data to a multiple of the vertex stride. That way we use
more memory, but we can compute an offset into the vertex buffer
directly, without having to track the vertex offset separately.
We can set the vertex offset while counting the data, which gets rid of
the need to pass all the counting machinery into the actual data
collection code.
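The arithmetic behind this, as a sketch:

    /* Round the byte offset up to a multiple of the vertex stride.
     * The padding wastes memory, but the first-vertex index becomes
     * a plain division instead of separately tracked state. */
    gsize aligned = (offset + stride - 1) / stride * stride;
    guint32 first_vertex = aligned / stride;

    /* later: vkCmdDraw (commands, n_vertices, 1, first_vertex, 0); */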
When attempting a complex transform, check if the clip can be ignored
and do that if possible.
That way we don't cause fallbacks when transforming the clip is too
complex.
Instead of emitting the render commands once per rectangle of the clip
region, just emit them once with the region's extents.
This is generally faster because it emits fewer commands to the GPU,
even though it may touch significantly more pixels.
For a proper method, we'd need to record the commands per clip rectangle
instead of emitting all of them all the time.
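In code terms, this swaps a loop over cairo_region_num_rectangles()
for a single cairo_region_get_extents() call:

    cairo_rectangle_int_t extents;

    /* One submission covering the bounding box of the whole region,
     * instead of one submission per rectangle. */
    cairo_region_get_extents (clip_region, &extents);
    /* ... emit the render commands once, clipped to extents ... */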
The border and color shaders - the ones that do AA - now multiply their
coordinates by the scale factor, which gives them better rounding
capabilities.
This in particular improves the case where they are used in fractional
scaling situations, where the scale is defined at the root element.
Previously, we just used the default scale factor, but now that we have
it available in push constants, we can read it back for creating
offscreens and rendering fallbacks.
So do that.
It's a 1:1 replacement for GskVulkanPushConstants, just without the
indirection through a different file.
GskVulkanPushConstants as a struct is gone now.
The file still exists to handle the push_constants operation.
1. Use a graphene_vec2_t
2. Ensure it's always positive
3. Don't break with fallback
The scale value is nothing more than an indication of how many pixels to
assume per unit of a node.
We don't want to render the offscreen transformed, we want to render it
as-is.
We lose the correct scale factor, but that requires some separate work,
so for now it gets a bit blurry on hidpi.
This introduces the rect object and adds a rect_distance() and
rect_coverage() function.
_distance() returns the signed distance to the rectangle.
_coverage() returns the coverage of a pixel centered at that position.
Note that the pixel size is computed using dFdx/dFdy.
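For reference, the signed distance is the standard formulation for an
axis-aligned rectangle; a C rendition of the idea (the real code is
GLSL):

    #include <math.h>

    /* Signed distance from point (px, py) to the rectangle
     * [x0, x1] x [y0, y1]: negative inside, positive outside.
     * Coverage is then roughly 0.5 - distance / pixel_size,
     * clamped to [0, 1], with pixel_size from dFdx/dFdy. */
    static float
    rect_distance (float px, float py,
                   float x0, float y0, float x1, float y1)
    {
      float dx = fmaxf (x0 - px, px - x1);
      float dy = fmaxf (y0 - py, py - y1);

      if (dx > 0 && dy > 0)
        return hypotf (dx, dy);   /* outside, past a corner */

      return fmaxf (dx, dy);      /* on an edge axis or inside */
    }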
When the node bounds were a non-integer size, the texture would get
ceil()ed pixels, but various viewport or scissor computations might
floor() instead, leaving the right/bottom row of pixels untouched.
Make sure those functions ceil(), too.
Vulkan has a different initial coordinate system to GL.
GL:
(-1, 1, -1) +------+.
|`. | `.
| `·--|---·
| : | :
+------+. :
`. : `.:
`·------· (1, -1, 1)
Vulkan:
(-1, -1, 0) +------+.
|`. | `.
| `·--|---·
| : | :
+------+. :
`. : `.:
`·------· (1, 1, 1)
so adjust the near and far plane we pass to
graphene_matrix_init_ortho() to make it end up with the same
projection as the GL renderer.
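A sketch of the adjusted call; NEAR_PLANE and FAR_PLANE stand in for
whatever plane values the renderer actually uses. Passing
2 * near - far as the near plane shifts the mapping so that depths in
[near, far] land in Vulkan's [0, 1] range where GL would produce
[-1, 1]:

    graphene_matrix_t projection;

    graphene_matrix_init_ortho (&projection,
                                0, width,      /* left, right */
                                0, height,     /* top, bottom */
                                2 * NEAR_PLANE - FAR_PLANE,
                                FAR_PLANE);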
Most of the time we want to compute them based on the child node we
render to the offscreen, but not always.
For blend and cross-fade nodes, they need to be computed based on the
node's bounds.
Fixes widget-factory page fade animation weirdly resizing the fading
pages.
gsk_vulkan_render_download_target() currently resets the uploader
objects before downloading the image that it produces. This is
problematic because there might be unreleased buffers and images
in the command queue.
In particular, this can make validation layers complain about the
glyph atlas - of all things! - upload buffer being released while
still being used by the command queue.
Fix that by resetting the uploader after downloading the image.
The current implementation of the glyph cache deals with atlases by
padding them with 1 pixel at the beginning, at the end, and between
each glyph.
That's cool and all, however, there's a very subtle problem with
this approach: the contents of the atlas are garbage, so this padding
is filled with garbage memory!
Rework the Vulkan glyph cache to draw each and every glyph in a
surface that has 1 pixel border of padding around it. Ensure the
surface is completely black by drawing a rectangle before handing
it to Pango to draw the glyph. Update tx and ty to pick the texture
position adjusted to the 1 pixel padding. The atlas now starts at
position (0, 0), since each glyph individually contains its own padding.
To improve legibility, add a PADDING define and use it everywhere.
Vulkan renders text using VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA and
VK_BLEND_FACTOR_SRC_ALPHA, but that implies per-channel alpha
blending, which currently produces the wrong results when blending
glyphs with the images beneath them.
Use the default pipeline constructors, which imply using ONE and
ONE_MINUS_SRC_ALPHA.
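That results in the standard blend state for premultiplied alpha:

    /* ONE / ONE_MINUS_SRC_ALPHA: the over operator for premultiplied
     * alpha, applied to both color and alpha channels. */
    VkPipelineColorBlendAttachmentState blend = {
      .blendEnable = VK_TRUE,
      .srcColorBlendFactor = VK_BLEND_FACTOR_ONE,
      .dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,
      .colorBlendOp = VK_BLEND_OP_ADD,
      .srcAlphaBlendFactor = VK_BLEND_FACTOR_ONE,
      .dstAlphaBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,
      .alphaBlendOp = VK_BLEND_OP_ADD,
      .colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT
                      | VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT,
    };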
Basically what GL does, but without any debug or feature flag
to gatekeep it, since the Vulkan backend itself is experimental
already.
Ceil surface sizes, and floor coordinates, to the fractional scale
value.
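In other words, snap to the device-pixel grid implied by the fractional
scale; a sketch:

    /* Sizes round up, origins round down, so content is never cut off
     * at the right/bottom edge. */
    width  = ceil (width * scale) / scale;
    height = ceil (height * scale) / scale;
    x = floor (x * scale) / scale;
    y = floor (y * scale) / scale;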