When we allocate a graphene_point_t on the stack, there's no guarantee
that it will be aligned at an 8-byte boundary, which is an assumption
made by gsk_pathop_encode() (which wants to use the lowest 3 bits to
encode the operation). In the places where it matters, force the
points on the stack and embedded in structs to be nicely aligned.
By using a distinct type for this (a union with a suitable size and
alignment), we ensure that the compiler will warn or error whenever we
can't prove that a particular point is, in fact, suitably aligned.
We can go from a `GskAlignedPoint *` to a `graphene_point_t *`
(which is always valid, because the `GskAlignedPoint` is aligned)
via &aligned_points[0].pt, but we cannot go back the other way
(which is not always valid, because the `graphene_point_t` is not
necessarily aligned nicely) without a cast.
In practice, it seems that a graphene_point_t on x86_64 *is* usually
placed at an 8-byte boundary, but this is not the case on 32-bit
architectures or on s390x.
In many cases we can avoid needing an explicit reference to the more
complicated type by making use of a transparent union. There's already
at least one transparent union in GSK's public API, so it's presumably
portable enough to match GTK's requirements.
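The mechanism looks roughly like this (the names here are illustrative,
not the exact GSK definition; the attribute is the GCC/Clang spelling):

    typedef union
    {
      GskAlignedPoint  *aligned_points;
      graphene_point_t *points;
    } GskPoints __attribute__ ((__transparent_union__));

    /* callers can pass either pointer type without a cast */
    void gsk_foo_set_points (GskPoints points, gsize n_points);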
Increasing the alignment of GskAlignedPoint also requires adjusting how
a GskStandardContour is allocated and initialized. This data structure
allocates extra memory to hold an array of GskAlignedPoint outside the
bounds of the struct itself, and that array now needs to be aligned
suitably. Previously the array started at the next byte after the
flexible array of gskpathop, but the alignment of a gskpathop is only
4 bytes on 32-bit architectures, so depending on the number of gskpathop
in the trailing flexible array, that pointer might be an unsuitable
location to allocate a GskAlignedPoint.
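The size computation then becomes something like this (a sketch, not
the exact GTK code):

    gsize size = sizeof (GskStandardContour)
               + n_ops * sizeof (gskpathop);

    /* round up: the gskpathop array may end on a 4-byte
     * boundary on 32-bit architectures */
    size = (size + G_ALIGNOF (GskAlignedPoint) - 1)
         & ~((gsize) G_ALIGNOF (GskAlignedPoint) - 1);
    size += n_points * sizeof (GskAlignedPoint);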
Resolves: https://gitlab.gnome.org/GNOME/gtk/-/issues/6395
Signed-off-by: Simon McVittie <smcv@debian.org>
Similar to the previous commit, to avoid undefined behaviour we need
to avoid evaluating out-of-bounds shifts, even if their result is going
to be ignored by being multiplied by 0 later.
Detected by running a subset of the test suite with
-Dsanitize=address,undefined on x86_64.
Signed-off-by: Simon McVittie <smcv@debian.org>
If, for example, e == 0, it is undefined behaviour to compute an
expression involving an out-of-range shift by (125 - e), even if the
result is in fact irrelevant because it's going to be multiplied by 0.
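Simplified illustration of the pattern (not the exact fp16 code):

    /* undefined: when e is outside (101, 113) the shift amount
     * (125 - e) can exceed the width of the type, even though
     * the flag multiplying it is 0 in exactly those cases */
    r = ((e < 113) & (e > 101)) * (m >> (125 - e));

    /* defined: the shift is only evaluated when it is in range */
    r = (e > 101 && e < 113) ? (m >> (125 - e)) : 0;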
This was already fixed for the memorytexture test in
commit 5d1b839 "testsuite: Fix another ubsan warning", so use the
implementation from that test everywhere. It's in the header as an
inline function to keep the linking of the relevant tests simple:
its only caller in production code is fp16.c, so there will be no
duplication outside the test suite.
Detected by running a subset of the test suite with
-Dsanitize=address,undefined on x86_64.
Signed-off-by: Simon McVittie <smcv@debian.org>
With the changes in !7473 we now use sampler2D arguments in functions.
However, when a function is called with a samplerExternalOES, we need
to overload it for that shader variant.
We were using slightly different numbers here, which isn't good.
The matrices in gdkcolordefs.h are tested in the colorstate-internal
tests, so they are at least proper inverses, and the products match.
It would be better to generate the glsl definitions, somehow.
When the image color state is not a default one, use the cicp
convert op to convert it to the ccs. And when the target color
state is a non-default one, use the shader in the reverse direction.
This shader receives cicp parameters via uniforms, and converts
the texture data from or to the output colorstate. It computes
the matrix in the vertex shader, and then picks the eotf/oetf
according to the cicp parameters in the fragment shader.
We were passing the wrong rect to the clip mode computation, resulting
in a rounded rect every time, even though it should pretty much always
be unclipped.
The visual results are unaffected, because the clip sent to the shader
was still correct.
Instead of allocating one large descriptor pool and hoping we never run
out of descriptors, allocate small ones dynamically, so we know we never
run out.
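In Vulkan terms the idea is roughly this (pool size and names here are
illustrative):

    /* when the current pool is exhausted, make another small
     * one instead of hoping a single big one is enough */
    VkDescriptorPoolCreateInfo info = {
      .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO,
      .maxSets = 64,
      .poolSizeCount = 1,
      .pPoolSizes = &(VkDescriptorPoolSize) {
        .type = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
        .descriptorCount = 64,
      },
    };

    vkCreateDescriptorPool (device, &info, NULL, &pool);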
Test included, though the test doesn't fail in CI, because llvmpipe
doesn't care about pool size limits. It does fail on my AMD GPU, though.
A fun side note about that test is that the GL renderer handles it best
in normal operation, because it caches offscreens per node and we draw the
same node repeatedly.
But the replay test expands them to duplicated unique nodes, and then
the GL renderer runs out of command queue length, so I had to disable
the test on it.
There is now a GskGpuYcbcr struct that maintains all the Vulkan
machinery related to YCbCrConversions.
It's a GskGpuCached, so it will make itself go away when it is no longer
used, i.e. when a video stopped playing.
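Roughly sketched (the field list is illustrative, not the exact
definition):

    struct _GskGpuYcbcr
    {
      GskGpuCached parent;  /* expires itself when unused */

      VkSamplerYcbcrConversion vk_conversion;
      VkSampler vk_sampler;
      VkDescriptorSetLayout vk_descriptor_set_layout;
    };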
Now that we don't use the fancy features anymore, we don't need to
enable them.
And that also means we don't need an env var to disable it for testing.
Now that we don't do fancy texture stuff anymore, we don't need fancy
shaders either, so we can just compile against Vulkan 1.0 again.
And that means we need no fallback shaders for Vulkan 1.0 anymore.
Instead of trying to cram all descriptors into one large array and only
binding it at the start, we now keep 1 descriptor set per image+sampler
combo and just rebind it every time we switch textures.
This is the very dumb solution that essentially maps to what GL does,
but the performance impact is negligible compared to the complicated
dance we were attempting before.
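Sketched in Vulkan terms (the lookup function is made up for
illustration):

    /* find or create the cached set for this image + sampler
     * combo, then rebind it for the next draw */
    set = lookup_descriptor_set (frame, image, sampler);
    vkCmdBindDescriptorSets (command_buffer,
                             VK_PIPELINE_BIND_POINT_GRAPHICS,
                             pipeline_layout,
                             0, 1, &set,
                             0, NULL);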
Rewrite all shaders to use 2 predefined samplers called GSK_TEXTURE0 and
GSK_TEXTURE1 instead of wrapper functions.
On GL and Vulkan compat mode, these map directly to samplers.
On Vulkan proper, they map to 2 indices into the texture array, like
before.
From now on, the old nvidia GPUs - i.e. the 3xx drivers - should start
working again.
Fixes: #6564
Fixes: #6574
Fixes: #6654
This allows GskGpuFrame implementations to store data per vertex
attribute.
This is just the plumbing, no actual implementation is done in this
commit.
This guarantees that the images get ID 0 and 1 (on GL), which is going
to be quite important for the next steps.
Just for funsies, here are fps numbers on my desktop for this change:
NGL 1500 => 1400
Vulkan 2650 => 2250
This by itself is just more work refcounting all those images, but
there's actually a goal here that will become visible in future
commits.
But this is split out for correctness and benchmarking purposes (the
overhead from refcounting seems to be negligible on my computer).
Just define GSK_N_TEXTURES in every glsl file, extract that #define in
the python parser and emit a static const uint variable
"{shader_name}_n_textures" in the generated header.
It's a struct collecting all relevant info for a texture passed to a
shader.
The ultimate goal is to get rid of the descriptors and let ops
manage them on their own.
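Roughly (the fields shown are illustrative):

    typedef struct
    {
      GskGpuImage *image;     /* the texture itself */
      GskGpuSampler sampler;  /* how the shader samples it */
      guint32 descriptor;     /* renderer-specific binding data */
    } GskGpuShaderImage;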
If GskGpuCache has an idea of what time it is, cached items can use that
time to update their last-use time instead of having to carry it around
through function calls everywhere.
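Something like this (a sketch; names are illustrative):

    static inline void
    gsk_gpu_cached_use (GskGpuCache  *cache,
                        GskGpuCached *cached)
    {
      /* no timestamp argument threaded through the callers */
      cached->timestamp = cache->timestamp;
    }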
Port an optimization of the GL renderer where it fast-paths crossfades
with progress <= 0 and >= 1, which should really never happen because
nobody should emit them in the first place, but oh well.
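The fast path amounts to roughly this (sketch; the visit function is
made up):

    float progress = gsk_cross_fade_node_get_progress (node);

    if (progress <= 0.0)
      return visit (self, gsk_cross_fade_node_get_start_child (node));
    if (progress >= 1.0)
      return visit (self, gsk_cross_fade_node_get_end_child (node));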
We no longer hardcode the few different classes we have, but generically
walk over all classes.
As a side effect we now get new classes added to stats automatically.
The content itself did not change.