We don't want to return a GFile because GFile can't deal with data:
urls.
That makes the code that doesn't deal with those URLs a bit more
complicated, but it makes the other code actually work.
GtkCssImageUrl also now decodes data urls immediately instead of only at
the first load. So don't use data urls if you care about performance.
Instead of encoding the raw data, encode the full image to a PNG.
And instead of stuffing that encoding into a string, use a full
data: url.
And then remove the width and height properties, because they're now
implicitly included in the data.
And then change the parser to match.
And because the parser now parses regular urls on top of data: urls, we
can now load any random file.
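For illustration, a texture serialized this way might look roughly like
the following in a node file (the node and property names are just a
sketch, and the base64 payload is elided):

    texture {
      /* illustrative values only */
      bounds: 0 0 16 16;
      texture: url("data:image/png;base64,...");
    }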
Instead of the previous approach using GVariant, this new approach uses
human-readable text files as the serialization format for render nodes.
The format is a custom one, but it is inspired by QML and conforms to
the CSS syntax. Because of that, we can use the CSS machinery from GTK
to parse it, and in particular share code to parse properties that GTK's
CSS machinery also supports, such as colors.
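As a rough sketch of the syntax (the specific node type and properties
shown here are illustrative, not taken from this commit):

    color {
      /* illustrative values only */
      bounds: 0 0 100 100;
      color: rgb(255, 0, 0);
    }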
This commit breaks all existing usages of node files - such as the
testsuite and various test tools. They will be fixed in further
commits.
Change the way we compute border color cutoffs to the same method that
browsers use. This method does not consider the corner sizes at all and
only looks at border-width.
Previously, when borders were too big - i.e. when a 100x100 rect had a
single 100x100 border, like the black part of ◔ - shrinking this rect
by 25px on either side would leave us with a 50x50 rect with a 75x75
border, which is obviously not correct.
Floating point values should never be compared for exact equality. GLib
has a G_APPROX_VALUE macro that lets us compare two values within a
provided precision, so we should use that instead.
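A minimal sketch of the pattern (the epsilon here is an arbitrary
choice):

    #include <float.h>
    #include <glib.h>

    static gboolean
    floats_equal (float a, float b)
    {
      /* G_APPROX_VALUE (GLib >= 2.58) is TRUE if the two values differ
       * by less than the given precision, instead of using ==.
       * FLT_EPSILON is just an example tolerance. */
      return G_APPROX_VALUE (a, b, FLT_EPSILON);
    }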
Artisanal, homegrown, locally sourced, vegan reference counting has been
replaced by the appropriate API in GLib, which does small things like
saturation and type checking.
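A minimal sketch of what that GLib API looks like in use (the Foo type
and its functions are hypothetical, not code from this commit):

    #include <glib.h>

    typedef struct {
      grefcount ref_count;   /* hypothetical refcounted type for the sketch */
      /* payload ... */
    } Foo;

    static Foo *
    foo_new (void)
    {
      Foo *self = g_new0 (Foo, 1);
      g_ref_count_init (&self->ref_count);
      return self;
    }

    static Foo *
    foo_ref (Foo *self)
    {
      g_ref_count_inc (&self->ref_count);      /* saturates instead of overflowing */
      return self;
    }

    static void
    foo_unref (Foo *self)
    {
      if (g_ref_count_dec (&self->ref_count))  /* TRUE once the count hits zero */
        g_free (self);
    }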
Apparently genTextures and friends only "reserve names"; initializing
them will actually create them. Using glObjectLabel on textures before
initializing them will throw a GL_INVALID_VALUE error.
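In other words (a sketch; the label string is arbitrary):

    GLuint texture_id;

    glGenTextures (1, &texture_id);                       /* only reserves a name */
    glObjectLabel (GL_TEXTURE, texture_id, -1, "foo");    /* GL_INVALID_VALUE here */

    glBindTexture (GL_TEXTURE_2D, texture_id);            /* this actually creates it */
    glObjectLabel (GL_TEXTURE, texture_id, -1, "foo");    /* fine now */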
When rendering to a texture, collecting the render ops might bind a
different framebuffer, so bind the one we want again before doing the
actual rendering.
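Roughly the idea, with hypothetical names for the target framebuffer
and the op-collection step:

    collect_render_ops (builder, node);                /* hypothetical helper;
                                                          may bind other framebuffers */

    glBindFramebuffer (GL_FRAMEBUFFER, texture_fbo);   /* re-bind the one we want */
    glClear (GL_COLOR_BUFFER_BIT);
    /* ... now execute the collected ops into the texture ... */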
This adds debug groups in various places, including the debug
nodes if those are in use. This makes the traces in tools like
renderdoc much easier to read.
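The underlying GL calls look roughly like this (the group label is just
an example):

    glPushDebugGroup (GL_DEBUG_SOURCE_APPLICATION, 0, -1, "Opacity node");
    /* ... emit the ops for this node ... */
    glPopDebugGroup ();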
GL keeps the uniform state per-program, but not per-frame. So we can't
pretend that this works for us. Keep the RenderOpBuilder around for the
entire lifetime of the renderer instead.
This fixes rendering to a texture on intel hardware. The glClear calls
would throw a GL_FRAMEBUFFER_INCOMPLETE error here, because the
gsk_gl_driver_begin_frame() call in do_render() reset the framebuffer
object in use.
This fixes the reftest introduced in the previous commit.
I'm using a mesh gradient here instead of drawing 4 individual sides to
avoid artifacts when those sides overlap in rounded corners.
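Assuming the drawing happens via cairo, one mesh patch per side could
look roughly like this sketch, with each side blending into its corner
colors (coordinates and colors are arbitrary):

    cairo_pattern_t *mesh = cairo_pattern_create_mesh ();

    /* top border side as a trapezoid from the outer to the inner edge */
    cairo_mesh_pattern_begin_patch (mesh);
    cairo_mesh_pattern_move_to (mesh, 0, 0);
    cairo_mesh_pattern_line_to (mesh, 100, 0);
    cairo_mesh_pattern_line_to (mesh, 90, 10);
    cairo_mesh_pattern_line_to (mesh, 10, 10);
    cairo_mesh_pattern_set_corner_color_rgba (mesh, 0, 1, 0, 0, 1);
    cairo_mesh_pattern_set_corner_color_rgba (mesh, 1, 1, 0, 0, 1);
    cairo_mesh_pattern_set_corner_color_rgba (mesh, 2, 0, 0, 1, 1);
    cairo_mesh_pattern_set_corner_color_rgba (mesh, 3, 0, 0, 1, 1);
    cairo_mesh_pattern_end_patch (mesh);

    /* ... repeat for the other three sides ... */

    cairo_set_source (cr, mesh);
    cairo_paint (cr);
    cairo_pattern_destroy (mesh);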
We don't want the new transform while drawing things on a texture.
Instead, only apply the new transform matrix when adding the final
texture drawing ops.
This fixes the stack cube rotation transition to at least look somewhat
better.
When sending render nodes from the client to the daemon we add an id,
and whenever we're about to re-send the entire node tree we instead
send the old id. We track all the nodes for the previous frame
of the surface this way.
Having the id on the daemon side will allow us to do much better deltas.
gsk/gskenums.h:181: Error: Gsk: multiple "@GSK_TRANSFORM_CATEGORY_2D" parameters for identifier "GskTransformCategory":
* @GSK_TRANSFORM_CATEGORY_2D: The matrix is a 2D matrix. This is equivalent
^
gsk/gsktransform.c:1342: Warning: Gsk: gsk_transform_to_2d: unknown parameter 'm' in documentation comment, should be 'self'
gsk/gsktransform.c:1368: Warning: Gsk: gsk_transform_to_2d: invalid return annotation
gsk/gsktransform.c:1461: Warning: Gsk: gsk_transform_to_translate: unknown parameter 'm' in documentation comment, should be 'self'
This reinstates diffing in the same way that it worked for offset nodes.
It would be possible to add diffing for affine transforms or even all
transforms, but I think this is unnecessary right now - and also quite
expensive to compute.
Make the API expect a transform of the proper category instead of
doing the check ourselves and returning TRUE/FALSE.
The benefit is that the main use case is switch (transform->category)
statements, and in those we already know the category and don't need to
check TRUE/FALSE.
Using the wrong matrix will now cause a g_warning().
... instead of computing it every time we need it.
This should be faster and we want to use it a lot more prominently.
Also, we have the struct memory available anyway.
In particular, add a per-category querying API for the matrix:
- gsk_transform_to_translate()
- gsk_transform_to_affine()
- gsk_transform_to_2d()
- gsk_transform_to_matrix()
This way, code can use the relevant one for the given category.
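A sketch of what a caller could look like; gsk_transform_get_category()
stands in here for the direct transform->category access mentioned
above, and the helper itself is illustrative, not GSK code:

    /* illustrative helper, not taken from this commit */
    static void
    transform_bounds (GskTransform          *transform,
                      const graphene_rect_t *child_bounds,
                      graphene_rect_t       *out_bounds)
    {
      float dx, dy;
      graphene_matrix_t matrix;

      switch (gsk_transform_get_category (transform))
        {
        case GSK_TRANSFORM_CATEGORY_IDENTITY:
          *out_bounds = *child_bounds;
          break;

        case GSK_TRANSFORM_CATEGORY_2D_TRANSLATE:
          gsk_transform_to_translate (transform, &dx, &dy);
          *out_bounds = *child_bounds;
          graphene_rect_offset (out_bounds, dx, dy);
          break;

        default:
          gsk_transform_to_matrix (transform, &matrix);
          graphene_matrix_transform_bounds (&matrix, child_bounds, out_bounds);
          break;
        }
    }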
Since we can do partial redraws, dropping every shadow that has been
unused for one frame is too aggressive. This is also a problem when a
shadow gets drawn onto a texture for a few frames.
This can happen for certain transform nodes. The transform node's
child's bounds are fine, but the transform node bounds are all nan.
Just ignore those bounds since we can't meaningfully render them anyway.
If the given matrix is explicitly of category IDENTITY, we don't need to
do anything, and in the 2D_TRANSLATE case, just offset the child bounds.
Those are the two most common cases.
The code didn't change; it was just shuffled around to make the
with_bounds() versions of the text rendering unnecessary and instead
route everything through the generic append_node() path.
They were a neat idea while they lasted. But now, it's time for
categorized transform nodes, where matrices with
GSK_MATRIX_CATEGORY_2D_TRANSLATE are the exact replacement.
Renderers have not been adapted for this purpose, so they (continue to)
run slow paths.
The first set of glyphs is created with a timestamp of 1. Later we
subtract the glyph timestamp from the cache timestamp, meaning we end up
with numbers ending in 9, e.g. 59. Now unfortunately !(60 <= 59), so we
do not end up increasing the old_pixels count of the cache. Later we then
call lookup() and DEcrease the old_pixels count, which makes the
unsigned int wrap and causes a huge old_pixels value, which in turn
causes us to drop the cache.
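A condensed sketch of the failure mode (MAX_AGE and the pixel counts
are illustrative, not the actual cache code):

    #define MAX_AGE 60             /* illustrative threshold */

    guint glyph_pixels = 100;      /* illustrative */
    guint old_pixels = 0;
    guint age = 60 - 1;            /* cache timestamp 60, glyph timestamp 1 */

    if (MAX_AGE <= age)            /* !(60 <= 59), so never taken */
      old_pixels += glyph_pixels;

    /* lookup() later assumes the glyph was counted as old: */
    old_pixels -= glyph_pixels;    /* unsigned wrap-around to 4294967196 */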