The relevant question here is one of detail: we have to choose whether
alpha-only formats declare their (nonexistent) color channels as
premultiplied or not, so that the code paths using them can do the
right thing.
Because we use premultiplied formats by default, it makes sense to
treat alpha-only formats the same way, because then the alpha-only code
doesn't need to do workarounds for straight alpha.
Where this matters, of course, is when expanding the alpha channel
into color channels, where we want to end up with white.
So make sure we do color = alpha there instead of color = 1 like we did
before.
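
A minimal sketch of that expansion, assuming a hypothetical helper that
widens an A8 pixel to premultiplied RGBA8 (the function name is
illustrative, not the actual GDK code):

    #include <glib.h>

    /* Hypothetical helper: expand an alpha-only pixel to premultiplied
     * RGBA8. With premultiplied alpha, "white" means color = alpha,
     * not color = 255. */
    static inline void
    expand_a8_to_premul_rgba8 (guint8 alpha,
                               guint8 rgba[4])
    {
      rgba[0] = alpha;  /* R */
      rgba[1] = alpha;  /* G */
      rgba[2] = alpha;  /* B */
      rgba[3] = alpha;  /* A */
    }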
We need these formats for mask-only textures.
For TIFFs, we convert them to RGBA (the idea that TIFF can save
everything needs to be buried, I guess) as TIFF can't do alpha-only.
Inverted alpha masks have an effect on the source, even if the mask
doesn't cover the source at all - or worse, is completely clipped out.
The GL renderer handles this fine, but Cairo and Vulkan had
optimizations that got this wrong.
In particular, fix the combination of luminance and alpha. We want to do
mask = luminance * alpha
and, for the inverted case,
mask = (1.0 - luminance) * alpha
so add a test that makes sure we do that, then fix the code and the
existing tests to conform to it.
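
A sketch of the intended mask math, under the assumption that mask
values are computed per pixel in floating point (the helper is
illustrative, not the renderer code):

    #include <glib.h>

    /* Sketch: a luminance mask multiplies the luminance with the pixel's
     * alpha; the inverted variant inverts only the luminance part. */
    static float
    mask_value (float    luminance,
                float    alpha,
                gboolean inverted)
    {
      if (inverted)
        return (1.0f - luminance) * alpha;

      return luminance * alpha;
    }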
The result of calling update_property needs
to be that the property is marked as set
afterward, even if the value we pass happens
to match the default value.
After this change, scrollbars have their
value-now show up as zero in the accessibility
page of the inspector, even when that matches
the lower bound.
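
For illustration, a minimal sketch of the call whose behavior this
fixes; the scrollbar here stands in for any GtkAccessible, and 0.0 is
assumed to match the lower bound:

    #include <gtk/gtk.h>

    /* Even though 0.0 matches the default/lower bound, the property
     * must be marked as set after this call. */
    static void
    set_value_now_to_lower_bound (GtkAccessible *scrollbar)
    {
      gtk_accessible_update_property (scrollbar,
                                      GTK_ACCESSIBLE_PROPERTY_VALUE_NOW, 0.0,
                                      -1);
    }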
Test included.
Fixes: #5886
With the current approach, we get duplicate labels
in the accessible name: _Cancel Cancel. Change things
around to always set the labelled-by accessible relation
if we have a label, and not the label accessible property.
When running the tests, only run the random (and potentially large) size
download test once instead of 10 times.
There's no real benefit in running it that often, both because it's
unlikely to fail only in the 2nd or 9th run and because the sizes are
picked randomly.
This also speeds up the test massively as the download test was
dominating the runtime.
Instead of picking a few numbers in advance and running them through the
test gauntlet every time, pick the random numbers at runtime.
This both increases the test coverage, in that it ultimately tests more
combinations across many runs, and reduces the runtime of individual
runs, because every run only runs the download tests twice (with 1px and
the random size) instead of 5 times.
And that speedup benefits the CI, where the asan runs would sometimes
cause this test to time out.
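
Picking the size at runtime could look roughly like this (a sketch;
the bounds are made up for illustration):

    #include <glib.h>

    /* Sketch: choose a random texture size per test run instead of
     * iterating a fixed table of sizes. */
    static int
    pick_random_texture_size (void)
    {
      return g_random_int_range (1, 1024 + 1);
    }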
Make it use an alpha value that is well defined, i.e. 0.4 instead of 0.5.
0.4 * 255 = 102
0.5 * 255 = 127.5
This avoids rounding issues where some math may cause the resulting
alpha value to be 127, and some other math ends up with 128.
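
A small sketch of the rounding difference (the helper is just for
illustration):

    #include <glib.h>
    #include <math.h>

    /* 0.5 * 255 = 127.5 sits exactly between two representable bytes,
     * so truncating and rounding disagree; 0.4 * 255 = 102 is
     * unambiguous. */
    static void
    show_rounding_difference (void)
    {
      guint8 truncated = (guint8) (0.5 * 255);        /* 127 */
      guint8 rounded   = (guint8) lrint (0.5 * 255);  /* 128 */
      guint8 safe      = (guint8) lrint (0.4 * 255);  /* 102 */

      g_print ("%d %d %d\n", truncated, rounded, safe);
    }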
The idea is that for a rectangle intersection, each corner of the
result is either entirely part of one original rectangle or it is
an intersection point.
By detecting those 2 cases and treating them differently, we can
simplify the code to compare rounded rectangles.
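
A hedged sketch of that corner classification, using graphene
rectangles; the enum and helper names are made up for illustration:

    #include <glib.h>
    #include <graphene.h>

    typedef enum
    {
      CORNER_FROM_A,     /* coincides with a corner of rect a */
      CORNER_FROM_B,     /* coincides with a corner of rect b */
      CORNER_FROM_EDGES  /* an edge of a crosses an edge of b */
    } CornerOrigin;

    /* Sketch: classify a corner (x, y) of the intersection of a and b.
     * If both coordinates come from a's corners, the corner is entirely
     * part of a (and inherits a's rounding); same for b; otherwise it
     * is an intersection point of two edges. */
    static CornerOrigin
    classify_corner (float                  x,
                     float                  y,
                     const graphene_rect_t *a,
                     const graphene_rect_t *b)
    {
      gboolean from_a = (x == a->origin.x || x == a->origin.x + a->size.width) &&
                        (y == a->origin.y || y == a->origin.y + a->size.height);
      gboolean from_b = (x == b->origin.x || x == b->origin.x + b->size.width) &&
                        (y == b->origin.y || y == b->origin.y + b->size.height);

      if (from_a)
        return CORNER_FROM_A;
      if (from_b)
        return CORNER_FROM_B;

      return CORNER_FROM_EDGES;
    }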
When registering an observer, we send a notification and for that we need
to query the action's state and param type. When setting up a muxer parent,
the same thing happens, except the action is queried on the parent instead.
This means that the muxer will notify observers about the parent's actions,
but not about its own.
Add a test to verify it works.
Fixes https://gitlab.gnome.org/GNOME/gtk/-/issues/5861
Add some odd-sized texture sizes to the
download tests, to trigger alignment issues
in the various upload code paths. And add
a size that is bigger than the max-texture-size
we force in one of our test setups.
To compensate, reduce the number of
runs per size from 20 to 10.
The GL renderers like to premultiply content that isn't premultiplied,
and due to the data loss with alpha == 0 (transparent white, transparent
black and transparent anything are all represented by (0, 0, 0, 0) when
premultiplied) these values cannot be converted back.
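
A sketch of where the data loss happens (illustrative code, not the
renderer's):

    #include <glib.h>

    /* Premultiply a straight-alpha RGBA8 pixel in place. With
     * alpha == 0, transparent white (255, 255, 255, 0), transparent
     * black and every other transparent color all collapse to
     * (0, 0, 0, 0), so unpremultiplying cannot recover the original
     * color channels. */
    static void
    premultiply (guint8 rgba[4])
    {
      guint8 a = rgba[3];

      rgba[0] = (rgba[0] * a + 127) / 255;
      rgba[1] = (rgba[1] * a + 127) / 255;
      rgba[2] = (rgba[2] * a + 127) / 255;
    }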
There is no longer a need to use gdk_texture_download() and force
conversion to ARGB8 format. We can download the pixels in the original
format again.
That way we avoid testing the conversion code and avoid having to deal
with differences in representable colors.
However, some formats do undergo conversions, so we allow pixel
comparisons to be accurate (requiring 16bit comparison accuracy) or
inaccurate (we only care about 8bit).
Note that for the default RGBA formats the two are identical: the pixels
need to be bit-exact, no matter what.
But the higher bit depth formats may differ more - floating point
can even differ at high accuracy (the float mantissa is 23 bit, we only
care about 16).
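
A sketch of the two comparison modes, assuming channels normalized to
16 bit (the helper name is illustrative):

    #include <glib.h>

    /* "Accurate" formats must match to full 16-bit precision,
     * "inaccurate" ones only in the top 8 bits. */
    static gboolean
    channels_equal (guint16  expected,
                    guint16  actual,
                    gboolean accurate)
    {
      if (accurate)
        return expected == actual;

      return (expected >> 8) == (actual >> 8);
    }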
When we emit items-changed due to a section
sorter change, don't also emit sections-changed.
Instead make the items-changed signal cover the
whole range.
Tests included.
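
Roughly, the intended emission from inside the model implementation is
a single items-changed over everything (a sketch, not the actual
GtkSortListModel code):

    #include <gio/gio.h>

    /* Sketch: when the section sorter changes, report the whole model
     * as changed once, instead of items-changed plus sections-changed.
     * Only the model implementation itself may emit this signal. */
    static void
    report_section_sorter_change (GListModel *self)
    {
      guint n_items = g_list_model_get_n_items (self);

      g_list_model_items_changed (self, 0, n_items, n_items);
    }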
This one tests a crossfade between two non-overlapping nodes with a clip
region that covers neither of the two nodes.
This tests that renderers can deal with clip regions that don't
overlap nodes in a situation where they will most likely want to create
an offscreen.
As offscreens are typically clipped to the clip region, this would cause
an empty offscreen and that can cause failures.
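
For illustration, the node tree this kind of test builds could look
like this (the rectangles are made up; the GSK calls are the public
node constructors):

    #include <gtk/gtk.h>

    /* Sketch: two non-overlapping color nodes cross-faded, wrapped in
     * a clip that covers neither of them. */
    static GskRenderNode *
    build_clipped_crossfade (void)
    {
      GdkRGBA red = { 1, 0, 0, 1 };
      GdkRGBA blue = { 0, 0, 1, 1 };
      GskRenderNode *start, *end, *fade, *clipped;

      start = gsk_color_node_new (&red, &GRAPHENE_RECT_INIT (0, 0, 10, 10));
      end = gsk_color_node_new (&blue, &GRAPHENE_RECT_INIT (100, 100, 10, 10));
      fade = gsk_cross_fade_node_new (start, end, 0.5);
      clipped = gsk_clip_node_new (fade, &GRAPHENE_RECT_INIT (50, 0, 10, 10));

      gsk_render_node_unref (start);
      gsk_render_node_unref (end);
      gsk_render_node_unref (fade);

      return clipped;
    }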
This was an experiment where an offscreen was translated inside an
existing clip.
Because renderers try to limit offscreens to the clip rect, this is
interesting: they might get the translation wrong.
Using gdk_texture_new_from_resource() is not valid here because we are
not sure whether the given resource is valid.
Plus, the previous optimization is no longer relevant, because we are
not using gdk_pixbuf_new_from_resource() anymore - which was what this
optimization was about before it was ported to GdkTexture.
Test attached.
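
A sketch of the error-tolerant alternative, assuming the data is looked
up from the resource and handed to a texture constructor that reports
errors (the helper name is made up):

    #include <gtk/gtk.h>

    /* Sketch: look the resource up and create the texture with an
     * error out-parameter instead of gdk_texture_new_from_resource(),
     * which assumes the resource contains valid image data. */
    static GdkTexture *
    try_texture_from_resource (const char  *path,
                               GError     **error)
    {
      GBytes *bytes;
      GdkTexture *texture;

      bytes = g_resources_lookup_data (path, G_RESOURCE_LOOKUP_FLAGS_NONE, error);
      if (bytes == NULL)
        return NULL;

      texture = gdk_texture_new_from_bytes (bytes, error);
      g_bytes_unref (bytes);

      return texture;
    }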
Add some tests for handling of failures.
The test data here is taken from gdk-pixbuf's
tests/test-images/fail directory, excluding anything
but png, tiff and jpg images.