This commit takes several steps towards rendering text
like we want to.
The creation of the cairo surface and texture is moved
to the backend (in GskVulkanRenderer). We add a mask
shader that is used in the new text pipeline to apply
the texture as a mask, like cairo_mask_surface() does.
There is a separate color text pipeline that uses the
already existing blend shaders to treat the texture as
a source, like cairo_paint() does.
The text node API is simplified to have just a single
offset, which determines the left end of the text baseline,
like all our other text drawing APIs.
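For reference, a minimal sketch of the two Cairo operations the
pipelines mirror. The function and variable names here are
illustrative only; the Vulkan shaders do the equivalent work on the
GPU:

    #include <cairo.h>

    static void
    draw_like_the_text_pipelines (cairo_t         *cr,
                                  cairo_surface_t *glyph_surface,
                                  double           x,
                                  double           y)
    {
      /* mask pipeline: the glyph texture acts as an alpha mask for
       * the current source, like cairo_mask_surface() */
      cairo_set_source_rgba (cr, 0, 0, 0, 1);
      cairo_mask_surface (cr, glyph_surface, x, y);

      /* color text pipeline: the glyph texture is the source itself
       * (e.g. color glyphs), like cairo_paint() */
      cairo_set_source_surface (cr, glyph_surface, x, y);
      cairo_paint (cr);
    }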
One cannot use #if...#endif within macro calls in Visual Studio and
possibly other compilers, and there are more uses of VLAs that need to be
replaced with g_newa().
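A minimal sketch of the VLA replacement, with made-up names;
g_newa() allocates the same scratch space on the stack without
relying on C99 VLAs:

    #include <glib.h>

    static void
    upload_glyph_coords (gsize n_glyphs)
    {
      /* before: float coords[n_glyphs * 4];  -- a VLA, rejected by MSVC */
      float *coords = g_newa (float, n_glyphs * 4);

      /* ... fill coords and copy them into the vertex buffer ... */
      (void) coords;
    }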
There were also checks for the clip type in gskvulkanrenderpass.c
that were possibly not done right (they used the address of the type
value to check for a type value), which triggered errors because a
pointer type was being compared to an enum/int type.
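A sketch of the kind of check that was wrong; the enum and field
names below are assumptions, not the exact ones from
gskvulkanrenderpass.c:

    #include <glib.h>

    typedef enum {
      GSK_VULKAN_CLIP_NONE,
      GSK_VULKAN_CLIP_RECT
    } GskVulkanClipType;

    typedef struct {
      GskVulkanClipType type;
    } GskVulkanClip;

    static gboolean
    clip_is_none (const GskVulkanClip *clip)
    {
      /* wrong: &clip->type compares a pointer to an enum value,
       *        which the compiler rejects:
       *   return &clip->type == GSK_VULKAN_CLIP_NONE;
       * right: compare the value itself */
      return clip->type == GSK_VULKAN_CLIP_NONE;
    }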
https://bugzilla.gnome.org/show_bug.cgi?id=773299
It's faster to render once for every rectangle in the clip region
than to render the outline of the clip region, especially because
this reduces the time necessary to build up the frame data.
In widget-factory (where we have 3 rectangles), this leads to a 5x
speedup in rendering time alone.
Snapshotting time goes from 10ms to ~1ms, which is another huge
improvement.
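A rough sketch of the per-rectangle loop, using only public
cairo_region_t API; render_rect() is a hypothetical stand-in for the
actual per-rectangle rendering:

    #include <cairo.h>

    static void
    render_clip_rectangles (cairo_region_t *clip,
                            void (* render_rect) (const cairo_rectangle_int_t *rect))
    {
      int i, n = cairo_region_num_rectangles (clip);

      /* draw each rectangle of the clip region separately instead of
       * rendering the whole extent of the region once */
      for (i = 0; i < n; i++)
        {
          cairo_rectangle_int_t rect;

          cairo_region_get_rectangle (clip, i, &rect);
          render_rect (&rect);
        }
    }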
Note: We interpolate premultiplied colors as per the CSS spec. This is
different from Cairo, which interpolates unpremultiplied.
So in testcases with translucent gradients, it's actually Cairo that is
wrong.
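A worked example of the difference, halfway between opaque red
(1,0,0,1) and transparent (0,0,0,0): interpolating premultiplied
keeps the hue and only fades the coverage, while interpolating
unpremultiplied drags the color towards black as it fades out. A
sketch of the premultiplied lerp:

    static void
    lerp_premultiplied (const float from[4],  /* r*a, g*a, b*a, a */
                        const float to[4],
                        float       t,
                        float       out[4])
    {
      /* with premultiplied channels, a plain per-channel lerp gives
       * the CSS result: halfway between (1,0,0,1) and (0,0,0,0) is
       * (0.5,0,0,0.5), i.e. pure red at 50% coverage */
      for (int i = 0; i < 4; i++)
        out[i] = from[i] * (1.0f - t) + to[i] * t;
    }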
This is now tracking the clips added by the clip nodes.
If any particular node can't deal with a clip, it falls back to Cairo
rendering. But if it can, it will render it directly.
This way we can pass the command pool around.
And that allows us to allocate and submit custom buffers.
And that is necessary to make staging images work.
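For illustration, a minimal sketch of what having the pool enables:
allocating a one-off command buffer for a staging upload (error
handling omitted; the device and pool are assumed to come from the
renderer):

    #include <vulkan/vulkan.h>

    static VkCommandBuffer
    allocate_staging_command_buffer (VkDevice      device,
                                     VkCommandPool pool)
    {
      VkCommandBuffer buffer;

      vkAllocateCommandBuffers (device,
                                &(VkCommandBufferAllocateInfo) {
                                  .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
                                  .commandPool = pool,
                                  .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
                                  .commandBufferCount = 1,
                                },
                                &buffer);

      return buffer;
    }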
Instead of having a setter for the transform, have a GskTransformNode.
Most of the operations that GTK does do not require a transform, so it
doesn't make sense to have it as a primary attribute.
Also, changing the transform requires updating the uniforms of the GL
renderer, so we're happy if we can avoid that.
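A sketch of the wrap-instead-of-mutate pattern, written against
today's GTK API (gsk_transform_node_new() and GskTransform; at the
time of this commit the constructor may have taken a plain matrix
instead):

    #include <gtk/gtk.h>

    static GskRenderNode *
    wrap_in_translation (GskRenderNode *child,
                         float          dx,
                         float          dy)
    {
      GskTransform *transform =
        gsk_transform_translate (NULL, &GRAPHENE_POINT_INIT (dx, dy));
      /* the child is wrapped in a transform node instead of having a
       * transform set on it */
      GskRenderNode *node = gsk_transform_node_new (child, transform);

      gsk_transform_unref (transform);
      return node;
    }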
Instead of pushing the root matrix, push the world matrix for the
current node. That way, the bounds we emit as vertices are actually
properly transformed.
First, we collect all the info about descriptor sets into a hash table,
then we use its size to determine the number of sets and allocate those
before we finally go ahead and use the hash table's contents to
initialize the descriptor sets.
And then we're ready to render.
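A condensed sketch of the three phases; pool and layout handling is
simplified, and the assumption that the hash table maps each image to
its descriptor slot is illustrative bookkeeping, not the exact code:

    #include <glib.h>
    #include <vulkan/vulkan.h>

    static void
    setup_descriptor_sets (VkDevice              device,
                           VkDescriptorPool      pool,
                           VkDescriptorSetLayout layout,
                           GHashTable           *needed)
    {
      guint n = g_hash_table_size (needed);
      VkDescriptorSet *sets = g_new (VkDescriptorSet, n);
      VkDescriptorSetLayout *layouts = g_new (VkDescriptorSetLayout, n);

      for (guint i = 0; i < n; i++)
        layouts[i] = layout;

      /* phase 2: allocate exactly as many sets as the hash table
       * says we need, in a single call */
      vkAllocateDescriptorSets (device,
                                &(VkDescriptorSetAllocateInfo) {
                                  .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO,
                                  .descriptorPool = pool,
                                  .descriptorSetCount = n,
                                  .pSetLayouts = layouts,
                                },
                                sets);

      /* phase 3: walk the hash table again and write each image's
       * sampler/view into its set with vkUpdateDescriptorSets()
       * (omitted here) */

      g_free (layouts);
      g_free (sets);
    }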
We can let the GPU do its stuff without waiting. The GPU knows what it's
doing.
Which means we now get a lot of time to spend on doing CPU things (read:
we're way better in benchmarks).
The old behavior is safer, so we want to keep it around for debugging.
It can be reenabled with GSK_RENDERING_MODE=sync.
And move the actual rendering code there.
A RenderPass is a collection of operations on the same target that
get executed one after another. It roughly targets VkRenderPass or
rather the subpasses of a VkRenderPass.
For now, only the infrastructure is there. No real stuff is happening.
This is refactoring work.
GskVulkanRender is supposed to be the global object for a render
operation, i.e. GskVulkanRenderer.render() will create this object for
what it does.
The object will be split into stages that perform the operations
necessary to create a drawing.