If the given matrix is explicitly of category IDENTITY, we don't need to
do anything; in the 2D_TRANSLATE case, we just offset the child bounds.
Those are the two most common cases.
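
Roughly, the bounds computation then branches like this (a minimal
sketch, not the actual GSK code; the category enum only mirrors the
internal GSK_MATRIX_CATEGORY_* values, and reading the offset out of
matrix row 3 assumes graphene's row-vector layout):

    #include <graphene.h>

    typedef enum {
      MATRIX_CATEGORY_UNKNOWN,
      MATRIX_CATEGORY_2D_TRANSLATE,
      MATRIX_CATEGORY_IDENTITY
    } MatrixCategorySketch;

    static void
    compute_transform_bounds (const graphene_matrix_t *matrix,
                              MatrixCategorySketch     category,
                              const graphene_rect_t   *child_bounds,
                              graphene_rect_t         *bounds)
    {
      switch (category)
        {
        case MATRIX_CATEGORY_IDENTITY:
          /* identity: the child bounds are already correct */
          graphene_rect_init_from_rect (bounds, child_bounds);
          break;

        case MATRIX_CATEGORY_2D_TRANSLATE:
          /* pure 2D translation: just offset the child bounds */
          graphene_rect_offset_r (child_bounds,
                                  graphene_matrix_get_value (matrix, 3, 0),
                                  graphene_matrix_get_value (matrix, 3, 1),
                                  bounds);
          break;

        default:
          /* anything more complex needs the full matrix transform */
          graphene_matrix_transform_bounds (matrix, child_bounds, bounds);
          break;
        }
    }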
The code didn't change; it was just shuffled around so that the
with_bounds() versions of the text rendering become unnecessary and
everything passes through the generic append_node() path.
They were a neat idea while they lasted. But now, it's time for
categorized transform nodes, where matrices with
GSK_MATRIX_CATEGORY_2D_TRANSLATE are the exact replacement.
Renderers have not been adapted for this purpose, so they (continue to)
run slow paths.
Some of the _diff implementations, most notably the two clip nodes, did
a whole bunch of work just to throw it away afterwards and invalidate
the entire union of the two render nodes. Fix this to only call
gsk_render_node_diff_impossible when the preceding if-condition is
FALSE, instead of always.
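
The resulting control flow looks roughly like this for a clip-like node
(an illustrative sketch; the struct and the two helpers are stand-ins
for GSK internals, not the real API):

    #include <graphene.h>
    #include <cairo.h>

    typedef struct {
      graphene_rect_t  clip;
      void            *child;
    } ClipNodeSketch;

    /* stand-ins for the internal child diff and fallback helpers */
    void diff_children   (void *child1, void *child2, cairo_region_t *region);
    void diff_impossible (const ClipNodeSketch *node1,
                          const ClipNodeSketch *node2,
                          cairo_region_t       *region);

    static void
    clip_node_diff (const ClipNodeSketch *node1,
                    const ClipNodeSketch *node2,
                    cairo_region_t       *region)
    {
      if (graphene_rect_equal (&node1->clip, &node2->clip))
        {
          /* equal clips: only the children can contribute differences */
          diff_children (node1->child, node2->child, region);
          return;
        }

      /* only when the clips differ do we invalidate everything */
      diff_impossible (node1, node2, region);
    }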
When the maximum cost for finding a path gets too high, the diff can now
be aborted.
Because render nodes have a fallback method (just marking the whole
bounds of both nodes as different), we use it to improve the performance
of diffs.
This brings fishbowl (which is basically a container node with N images
that change every frame) back to close to previous performance.
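
A minimal sketch of what that fallback amounts to, assuming the float
node bounds get enlarged to whole pixels before entering the cairo
region (the function name is illustrative):

    #include <math.h>
    #include <cairo.h>
    #include <graphene.h>

    static void
    mark_whole_bounds (const graphene_rect_t *bounds1,
                       const graphene_rect_t *bounds2,
                       cairo_region_t        *region)
    {
      graphene_rect_t both;
      cairo_rectangle_int_t rect;

      /* everything inside the union of both nodes may have changed */
      graphene_rect_union (bounds1, bounds2, &both);

      rect.x = (int) floorf (both.origin.x);
      rect.y = (int) floorf (both.origin.y);
      rect.width = (int) ceilf (both.origin.x + both.size.width) - rect.x;
      rect.height = (int) ceilf (both.origin.y + both.size.height) - rect.y;

      cairo_region_union_rectangle (region, &rect);
    }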
This includes a copy of the diff(1) algorithm by Davide Libenzi that is
used by git diff.
It's used for the common case of container nodes having only very few
changes, for the few child widgets that changed (like a button lighting
up when highlighted or a spinning spinner).
... and gsk_render_node_can_diff(). Those are vfuncs to compute a region
containing all the pixels that differ between the two nodes.
This is just the plumbing that chains into node classes. No node
implements it yet.
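
As a hypothetical sketch, the per-class plumbing amounts to a
can_diff/diff pair of function pointers next to the existing draw vfunc
(field names and layout are illustrative, not the actual, internal
GskRenderNodeClass):

    #include <gsk/gsk.h>

    typedef struct {
      void     (* draw)     (GskRenderNode *node, cairo_t *cr);
      gboolean (* can_diff) (const GskRenderNode *node1,
                             const GskRenderNode *node2);
      void     (* diff)     (GskRenderNode  *node1,
                             GskRenderNode  *node2,
                             cairo_region_t *region);
    } NodeClassSketch;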
Adding the offset node broke serialization in 2 ways:
1. We store the enum value in the node, so make sure to not change it
for existing values
2. The offset node was missing in the deserialization lookup table
This is a special case of the transform node that does a 2D translation.
The implementation in the Vulkan and GL renderers is crude and just does
the same as the transform node.
Nothing uses that node yet.
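
For the cairo fallback, drawing such a node amounts to little more than
this sketch (the real node implementation is internal to GSK):

    #include <gsk/gsk.h>

    static void
    draw_offset_child (cairo_t       *cr,
                       float          dx,
                       float          dy,
                       GskRenderNode *child)
    {
      cairo_save (cr);
      cairo_translate (cr, dx, dy);
      gsk_render_node_draw (child, cr);
      cairo_restore (cr);
    }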
This way, we can postpone the actual rendering of the node until it
reaches the renderer. This allows the renderer to choose the right scale
to render at, so it can decide on its own to use 2x scale for hidpi.
Last but not least, it makes all nodes independent of the context they
are created in, because they do not need to know at snapshot time what
they will ultimately be rendered into.
An alternative GskTextNode constructor that does no text measuring. That
way, we can measure the text beforehand and check whether the node would
fall outside the current clip anyway.
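
The intended usage looks roughly like this (an illustrative sketch; the
helper name and the ink-rect conversion are assumptions):

    #include <pango/pango.h>
    #include <graphene.h>

    static gboolean
    text_outside_clip (PangoFont             *font,
                       PangoGlyphString      *glyphs,
                       float                  x,
                       float                  y,
                       const graphene_rect_t *clip)
    {
      PangoRectangle ink;
      graphene_rect_t bounds;

      /* measure before creating any node */
      pango_glyph_string_extents (glyphs, font, &ink, NULL);
      graphene_rect_init (&bounds,
                          x + (float) ink.x / PANGO_SCALE,
                          y + (float) ink.y / PANGO_SCALE,
                          (float) ink.width / PANGO_SCALE,
                          (float) ink.height / PANGO_SCALE);

      /* no intersection with the clip means the node can be skipped */
      return !graphene_rect_intersection (clip, &bounds, NULL);
    }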
Remove all the old 2.x and 3.x version annotations.
GTK+ 4 is a new start, and from the perspective of a
GTK+ 4 developer all these APIs have been around since
the beginning.
This happens when deserializing testcases and it really confuses
valgrind into thinking we're longjmp()ing.
And deserializing rendernodes is slow anyway, so who cares about a few
more malloc()s.
Add a setter for per-renderer debug flags, and use
them where possible. Some places don't have easy access
to a renderer, so this is not complete.
Also, use g_message instead of g_print throughout.
The copy of the PangoGlyphString we do here was showing up
in some profiles. To avoid it, allocate the PangoGlyphInfo array
as part of the node itself. Update all callers to deal with
the slight api change required for this.
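
The allocation pattern is roughly the following (an illustrative sketch;
the real GskTextNode struct is internal and laid out differently): the
glyph infos are copied once into memory that is part of the node
allocation, instead of duplicating the whole PangoGlyphString.

    #include <string.h>
    #include <pango/pango.h>

    typedef struct {
      guint          n_glyphs;
      PangoGlyphInfo glyphs[];   /* allocated together with the struct */
    } TextNodeSketch;

    static TextNodeSketch *
    text_node_sketch_new (const PangoGlyphString *glyph_string)
    {
      TextNodeSketch *node;

      node = g_malloc (sizeof (TextNodeSketch)
                       + sizeof (PangoGlyphInfo) * glyph_string->num_glyphs);
      node->n_glyphs = glyph_string->num_glyphs;
      memcpy (node->glyphs, glyph_string->glyphs,
              sizeof (PangoGlyphInfo) * glyph_string->num_glyphs);

      return node;
    }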
Rename the surface getter to peek, following other render
node getters, and make the surface-based constructor private,
since it is not something we want to encourage.
Update all callers.
This commit takes several steps towards rendering text
like we want to.
The creation of the cairo surface and texture is moved
to the backend (in GskVulkanRenderer). We add a mask
shader that is used in the new text pipeline to apply
the texture as a mask, like cairo_mask_surface does.
There is a separate color text pipeline that reuses the
already existing blend shaders to treat the texture as
a source, like cairo_paint does.
The text node api is simplified to have just a single
offset, which determines the left end of the text baseline,
like all our other text drawing APIs.
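
In cairo terms, the two pipelines correspond to this distinction (a
sketch, not renderer code):

    #include <gdk/gdk.h>

    static void
    composite_glyph_surface (cairo_t         *cr,
                             cairo_surface_t *glyph_surface,
                             gboolean         has_color,
                             const GdkRGBA   *color,
                             double           x,
                             double           y)
    {
      if (has_color)
        {
          /* color glyphs: the surface already contains the pixels */
          cairo_set_source_surface (cr, glyph_surface, x, y);
          cairo_paint (cr);
        }
      else
        {
          /* regular glyphs: the surface is only coverage, use it as a mask */
          gdk_cairo_set_source_rgba (cr, color);
          cairo_mask_surface (cr, glyph_surface, x, y);
        }
    }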
Currently, this information is not used since cairo_show_glyphs
deals with color glyphs for us. But when we get to uploading
glyphs to a texture atlas, we will need it to do the right thing.
We don't look at individual glyphs here, but just whether the
font has the has-color flag set. In practice, all glyphs in
such a font will be color glyphs, and we can avoid loading all
the glyphs this way.
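
A sketch of the per-font check, assuming a FreeType-backed PangoFcFont
(GTK's usual case on Linux); FT_HAS_COLOR() is FreeType's face-wide
color flag, so no individual glyphs need to be loaded:

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <pango/pangofc-font.h>

    static gboolean
    font_has_color_glyphs (PangoFont *font)
    {
      FT_Face face;
      gboolean has_color;

      face = pango_fc_font_lock_face (PANGO_FC_FONT (font));
      has_color = FT_HAS_COLOR (face);
      pango_fc_font_unlock_face (PANGO_FC_FONT (font));

      return has_color;
    }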