The special case for ASCII glyphs is unfortunately not
working very well, because of an oversight in pango:
When I added subpixel positioning, I made pango_shape()
default to not rounding, and made PangoLayout call
pango_shape_with_flags() and pass the rounding information
down. The upshot is that we need to use the _with_flags
variant here and tell it to round positions, so it matches
what the text node contains.
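
A minimal sketch of the fix (the paragraph text, item and glyph string
are assumed to come from the usual pango_itemize() step):

  PangoGlyphString *glyph_string = pango_glyph_string_new ();

  /* Shape with explicit rounding so the glyph positions match what
   * the text node stores. */
  pango_shape_with_flags (text + item->offset, item->length,
                          NULL, 0,
                          &item->analysis, glyph_string,
                          PANGO_SHAPE_ROUND_POSITIONS);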
Using GtkCssSection in public headers here may be
ok from the C perspective, since it all ends up in
the same library anyway. But it causes circular
dependency problems for our gir files that are still
split by namespace.
To avoid this problem, copy the GtkCssLocation struct
as GskParseLocation, and pass two of them
instead of a GtkCssSection in the error callback.
Update all users.
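
A minimal sketch of what an error callback looks like with the new
struct (its fields mirror GtkCssLocation):

  static void
  parse_error_cb (const GskParseLocation *start,
                  const GskParseLocation *end,
                  const GError           *error,
                  gpointer                user_data)
  {
    g_printerr ("%" G_GSIZE_FORMAT ":%" G_GSIZE_FORMAT ": %s\n",
                start->lines + 1, start->line_chars + 1,
                error->message);
  }

  node = gsk_render_node_deserialize (bytes, parse_error_cb, NULL);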
Fixes: #2454
When encoding big blobs of data in base64, insert newlines.
Base64 allows it, CSS allows it, so there is no need to make
GtkTextView struggle with multi-megabyte lines.
Update nodeparser tests to reflect this change.
Rename _gtk_css_print_string to strip the _ and add
an insert_newlines argument to it. Update all callers,
and make the render node serializer insert newlines.
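
A minimal sketch of the idea (the wrap width is an assumption, not
necessarily what the serializer uses):

  static char *
  base64_encode_wrapped (gconstpointer data, gsize size)
  {
    char *b64 = g_base64_encode (data, size);
    GString *out = g_string_new (NULL);

    /* break the encoded data into lines of 64 characters */
    for (gsize i = 0; b64[i]; i++)
      {
        g_string_append_c (out, b64[i]);
        if ((i + 1) % 64 == 0)
          g_string_append_c (out, '\n');
      }

    g_free (b64);
    return g_string_free (out, FALSE);
  }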
A GskGLShader is an abstraction of a GLSL fragment shader that
can produce pixel values given inputs:
* N (currently max 4) textures
* Current arguments for the shader uniforms
Supported uniform types are float, int, uint, bool, vec2, vec3 and vec4.
There is also a builder for the uniform arguments which are
passed around as immutable GBytes in the built form.
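
A minimal sketch of building such arguments (the resource path and
uniform index are illustrative):

  GskGLShader *shader;
  GskShaderArgsBuilder *builder;
  GBytes *args;

  shader = gsk_gl_shader_new_from_resource ("/org/example/shader.glsl");
  builder = gsk_shader_args_builder_new (shader, NULL);
  gsk_shader_args_builder_set_float (builder, 0, 0.5);  /* uniform #0 */
  args = gsk_shader_args_builder_free_to_args (builder);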
A GskGLShaderNode is a render node that renders a GskGLShader inside a
specified rectangular bounds. It renders its child nodes as textures
and passes those as texture arguments to the shader. You also pass it
a uniform arguments object.
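
A minimal sketch of creating the node, continuing from the snippet
above (bounds and child are assumed to exist):

  GskRenderNode *children[1] = { child };
  GskRenderNode *node;

  node = gsk_gl_shader_node_new (shader, &bounds, args, children, 1);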
Language bindings—especially ones based on introspection—cannot deal
with custom type hierarchies. Luckily for us, GType has a derivable type
with low overhead: GTypeInstance.
By turning GskRenderNode into a GTypeInstance, and creating derived
types for each class of node, we can provide an introspectable API to
our non-C API consumers, with no functional change to the C API itself.
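
A minimal sketch of what this buys us (the color and bounds variables
are illustrative; GSK_TYPE_COLOR_NODE is one of the per-class GTypes
this change introduces):

  GskRenderNode *node = gsk_color_node_new (&color, &bounds);

  /* Every node class now has a registered GType, so generic GType
   * machinery (and therefore introspection) works on it. */
  g_print ("%s\n", g_type_name (G_TYPE_FROM_INSTANCE (node)));

  if (G_TYPE_CHECK_INSTANCE_TYPE (node, GSK_TYPE_COLOR_NODE))
    g_print ("it's a color node\n");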
Use cairo-script-interpreter to parse the scripts that generate cairo
nodes.
This requires libcairoscriptinterpreter.so to work properly, but if
it isn't found we disable this (unimportant for normal functioning)
code and just emit a parser warning.
The testsuite, however, requires it, or it will fail.
A new test is included that tests all of this.
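
A minimal sketch of how the interpreter is driven (the hook wiring and
surface choice are illustrative, not the exact code):

  #include <cairo-script-interpreter.h>

  static cairo_surface_t *
  csi_surface_create (void *closure, cairo_content_t content,
                      double width, double height, long uid)
  {
    cairo_rectangle_t extents = { 0, 0, width, height };
    return cairo_recording_surface_create (content, &extents);
  }

  ...

  cairo_script_interpreter_t *csi = cairo_script_interpreter_create ();
  cairo_script_interpreter_hooks_t hooks = { .surface_create = csi_surface_create };

  cairo_script_interpreter_install_hooks (csi, &hooks);
  cairo_script_interpreter_feed_string (csi, script, strlen (script));
  cairo_script_interpreter_destroy (csi);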
CSS does not do exponents, so printing numbers close to 0 as 1.234e-15
does not work.
Also up the accuracy to 17 digits because that's what everyone else
uses.
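
A minimal sketch of the direction (not the exact helper, and the real
code handles precision differently): format with %f, which never
produces an exponent, and trim the trailing zeros:

  static void
  append_css_double (GString *string, double d)
  {
    char buf[128];

    g_ascii_formatd (buf, sizeof (buf), "%.17f", d);

    /* strip trailing zeros and a dangling '.' */
    if (strchr (buf, '.'))
      {
        char *end = buf + strlen (buf) - 1;
        while (*end == '0')
          *end-- = '\0';
        if (*end == '.')
          *end = '\0';
      }

    g_string_append (string, buf);
  }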
Instead of only allowing for glyph indexes, allow ASCII characters as
replacements. So this glyph sequence
glyphs: 65 8, 66 8, 67 8
Can be replaced by
glyphs: "ABC"
provided that the glyphs for "A", "B" and "C" are 65, 66 and 67
respectively and their advance is exactly 8.
x offset and y offset must always be 0 and every glyph must start a
cluster.
Update the docs as outlined in #1887.
In particular, the changes:
1. Require no property and have a working default for everything.
2. Are clear about what gets printed and how.
Tests have been adapted to still pass.
When printing, behave the same way as when parsing:
Magically skip a container node if there is one - just like the
parser magically creates a container node to hold all the nodes
it parses.
We don't want to return a GFile because GFile can't deal
with data: urls.
That makes the code that doesn't deal with those URLs a bit more
complicated, but it makes the other code actually work.
GtkCssImageUrl also now decodes data urls immediately instead of only at
the first load. So don't use data urls if you care about performance.
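
A minimal sketch of the kind of handling this needs (a hypothetical
helper, not GTK's internal code), since g_file_new_for_uri() can't
usefully deal with the data: scheme:

  static GBytes *
  decode_base64_data_url (const char *url)
  {
    const char *comma;
    guchar *decoded;
    gsize len;

    if (!g_str_has_prefix (url, "data:"))
      return NULL;

    comma = strchr (url, ',');
    if (comma == NULL ||
        (gsize) (comma - url) < strlen ("data:;base64") ||
        strncmp (comma - strlen (";base64"), ";base64", strlen (";base64")) != 0)
      return NULL;  /* only the ";base64" form is handled in this sketch */

    decoded = g_base64_decode (comma + 1, &len);
    return g_bytes_new_take (decoded, len);
  }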
Instead of encoding the raw data, encode the full image to a PNG.
And instead of stuffing that encoding into a string, use a full
data: url.
And then remove the width and height properties, because they're now
implicitly included in the data.
And then change the parser to match.
And because the parser now parses regular urls on top of data: urls, we
can now load any random file.
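
A minimal sketch of how such a URL is built (png_bytes is assumed to
already hold an encoded PNG, e.g. from gdk_texture_save_to_png_bytes()
in newer GTK):

  static char *
  png_bytes_to_data_url (GBytes *png_bytes)
  {
    char *b64 = g_base64_encode (g_bytes_get_data (png_bytes, NULL),
                                 g_bytes_get_size (png_bytes));
    char *url = g_strconcat ("data:image/png;base64,", b64, NULL);

    g_free (b64);
    return url;
  }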