We can just check if the subsurfaces contain content - and if they do,
they will be offloaded and we can ignore the diff.
This essentially reverts 48740de71a
Instead of relying on diffing subsurface nodes, we track damage
generated by offloaded contents inside GskOffload.
There are 3 stages a subsurface node can be in:
1. not offloaded
   Drawing is done by the renderer
2. offloaded above
   The renderer draws nothing
3. offloaded below
   The renderer needs to punch a hole.
Whenever the stage changes, we need to repaint.
And that can happen without the subsurface's contents changing, like
when a widget is put above the subsurface and it needs to go from
offloaded above to offloaded below.
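A rough sketch of what gets tracked (the enum and helper below are
illustrative, not the actual GskOffload code):

    #include <cairo.h>

    typedef enum {
      SUBSURFACE_NOT_OFFLOADED,   /* 1. the renderer draws the content */
      SUBSURFACE_OFFLOADED_ABOVE, /* 2. the renderer draws nothing     */
      SUBSURFACE_OFFLOADED_BELOW, /* 3. the renderer punches a hole    */
    } SubsurfaceStage;

    typedef struct {
      SubsurfaceStage       stage;
      cairo_rectangle_int_t rect;  /* device pixels covered by the subsurface */
    } SubsurfaceInfo;

    /* whenever the stage changes, damage the area, even if the
     * subsurface contents themselves did not change */
    static void
    subsurface_set_stage (SubsurfaceInfo  *info,
                          SubsurfaceStage  stage,
                          cairo_region_t  *damage)
    {
      if (info->stage == stage)
        return;

      cairo_region_union_rectangle (damage, &info->rect);
      info->stage = stage;
    }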
So we now recruit GskOffload for tracking these changes, instead of
relying on the subsurface diffing.
But we still need the subsurface diffing code to work for the
non-offloaded case, because then the offloading code is not used.
So we keep using it whenever that happens.
Note that when a subsurface transitions between being offloaded and not
being offloaded, we may diff it twice - once in the offload code and
once in the node diffing - but that shouldn't matter.
When a subsurface goes from not offloaded to offloaded (or vice versa),
we need to add the whole node to the diff region, because we switch from
whatever contents were drawn to a punched hole.
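Schematically, that amounts to something like this (a sketch;
gsk_render_node_get_bounds() is the real API, the rest is illustrative
and ignores transforms):

    #include <math.h>
    #include <gsk/gsk.h>

    static void
    diff_whole_node (GskRenderNode  *node,
                     cairo_region_t *region)
    {
      graphene_rect_t bounds;
      cairo_rectangle_int_t rect;

      gsk_render_node_get_bounds (node, &bounds);

      /* round out to whole pixels and add the full node area */
      rect.x = (int) floorf (bounds.origin.x);
      rect.y = (int) floorf (bounds.origin.y);
      rect.width  = (int) ceilf (bounds.origin.x + bounds.size.width)  - rect.x;
      rect.height = (int) ceilf (bounds.origin.y + bounds.size.height) - rect.y;

      cairo_region_union_rectangle (region, &rect);
    }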
The 2 callers of gsk_gpu_get_node_as_image() were already computing the
minimum clip region and in particular aligning it to the pixel grid, so
intersecting with node bounds again was causing that alignment to be
busted.
When using a window size and scale that don't multiply to an integer, we
were using the wrong method to adjust it.
The Wayland fractional scaling spec just says:
> For toplevel surfaces, the size is rounded halfway away from zero.
This is meant to be interpreted as "create a large enough buffer to hold
the partial pixels, and the compositor will blend it mapped to the pixel
grid", even if that means the buffer slightly overhangs.
Example:
An 11-unit-wide window at 150% will need an 11 * 1.5 = 16.5 pixel wide
buffer. This should be rounded to 17 pixels but rendered as if only 16.5
pixels are occupied by the window, not as if all 17 pixels are occupied.
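The rounding itself is just (a sketch of the arithmetic, not the actual
Wayland code):

    #include <math.h>

    /* 11 logical units at scale 1.5: 11 * 1.5 = 16.5 -> 17 px buffer,
     * rounded halfway away from zero as the spec requires; the surface
     * is still presented as if it were 16.5 px wide, so the extra half
     * pixel merely overhangs */
    static int
    buffer_size_for (int logical_size, double fractional_scale)
    {
      return (int) round (logical_size * fractional_scale);
    }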
This commit is wrong.
It does achieve what it sets out to do, but the method doesn't work.
It confused multiple things in one commit, the commit message only
describes the symptoms it tries to fix and not why the fix is correct,
it includes no tests, and it wasn't properly reviewed.
Related: !6871
We want to use a viewport that gives us the right scale back.
This fixes problems where glyph lookups were inefficient because
the scale part of the key would fluctuate ever so slightly.
We were not finding an overlaid spinner, since it is implemented
as a texture in a (most of the time) non-affine transform, and
we were aborting our tree walk when we saw such transforms.
Instead, don't abort the walk on any transforms, but check if we
are in an affine context before deciding to offload a subsurface.
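Roughly like this (a sketch using the public GskTransform category API;
the walk itself is illustrative):

    #include <gsk/gsk.h>

    /* keep walking through transform nodes, but remember whether the
     * accumulated transform is still affine */
    static void
    visit (GskRenderNode *node,
           gboolean       is_affine)
    {
      if (gsk_render_node_get_node_type (node) == GSK_TRANSFORM_NODE)
        {
          GskTransform *transform = gsk_transform_node_get_transform (node);

          is_affine = is_affine &&
            gsk_transform_get_category (transform) >= GSK_TRANSFORM_CATEGORY_2D_AFFINE;

          visit (gsk_transform_node_get_child (node), is_affine);
          return;
        }

      /* ... for subsurface nodes, only offload if is_affine is TRUE ... */
    }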
The previous code was ignoring non-scissor clips, which would make it
overeager at punching holes.
It also was not working with fractional coordinates.
Fixes: #6375
We can reach the code that removes the item from the hash table
before or after the weak unref has triggered. Just leave the
weakref in place and let it do its thing, if it hasn't gone
off yet. That matches what we do in free.
Fixes: #6377
We do gc in a timeout, when an arbitrary GL context might be
current, so we need to make sure it's ours and we don't free
random textures in another context.
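A minimal sketch of that check, assuming the cache remembers its own
GdkGLContext (names are illustrative):

    #include <gdk/gdk.h>

    typedef struct {
      GdkGLContext *context;
      /* ... cached GL textures ... */
    } TextureCache;

    static gboolean
    gc_timeout_cb (gpointer user_data)
    {
      TextureCache *cache = user_data;

      /* an arbitrary context may be current; switch to ours before
       * deleting any GL textures */
      if (gdk_gl_context_get_current () != cache->context)
        gdk_gl_context_make_current (cache->context);

      /* ... now it is safe to free stale textures ... */

      return G_SOURCE_CONTINUE;
    }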
Fixes: #6366
Count dead pixels in textures (i.e. the number of pixels in GPU
textures that are no longer backed by a live GdkTexture object),
and when there are too many, do a gc before rendering the next
frame.
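Schematically (illustrative names and threshold, not the actual code):

    #include <glib.h>

    #define DEAD_PIXEL_LIMIT (1000 * 1000)   /* illustrative threshold */

    typedef struct {
      gsize dead_pixels;
      /* ... cached GPU textures ... */
    } Cache;

    /* called when a GdkTexture with a cached GPU copy is finalized */
    static void
    cache_texture_died (Cache *cache, int width, int height)
    {
      cache->dead_pixels += (gsize) width * height;
    }

    /* checked before rendering the next frame */
    static gboolean
    cache_needs_gc (Cache *cache)
    {
      return cache->dead_pixels > DEAD_PIXEL_LIMIT;
    }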
Count the uses of a cached texture - from the device (via the linked
list) and from the texture (via render data / weak ref), and only
free the item once the use count reaches zero.
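In sketch form (illustrative names):

    #include <glib.h>

    /* used once by the device's linked list and, while the GdkTexture
     * is alive, once by the texture's render data / weak ref */
    typedef struct {
      int use_count;
      /* ... GPU image, timestamps, ... */
    } CachedItem;

    static void
    cached_item_unuse (CachedItem *item)
    {
      item->use_count--;
      if (item->use_count == 0)
        g_free (item);   /* freed only when neither side uses it anymore */
    }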
Instead of forever running a timeout to do gc, ensure the timeout
is scheduled whenever we render a frame (this is done by calling
gsk_gpu_device_maybe_gc () before gsk_gpu_frame_render (), and
gsk_gpu_device_queue_gc () after).
Read the GSK_CACHE_TIMEOUT environment variable to override the
default 15s timeout for cache gc. This is mainly meant for debugging.
Since we don't really need two knobs, reuse the gc timeout value
for the max age of items too.
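Reading the override could look roughly like this (a sketch; g_getenv()
and g_ascii_strtoll() are standard GLib, the rest is illustrative):

    #include <glib.h>

    #define DEFAULT_CACHE_TIMEOUT 15   /* seconds */

    static gint64
    get_cache_timeout (void)
    {
      const char *value = g_getenv ("GSK_CACHE_TIMEOUT");

      if (value == NULL)
        return DEFAULT_CACHE_TIMEOUT;

      /* the same value doubles as the max age of cached items */
      return g_ascii_strtoll (value, NULL, 10);
    }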
The node processing wasn't skipping 0-size nodes when using the
uber shader, leading to assertions down the road. Since the ngl
renderer doesn't use uber shaders, this only affects vulkan.
Test included.
Fixes: #6370
When we don't have an embedded font file via a url, then we want
to parse fonts "as normal", i.e. allow fallback for aliases like
"Monospace 10". This was broken when the url support was added.
Make it work again.
Update affected tests. In particular, the output of the text-fail
test goes back to be the same it was before the url changes.
GL needs version 4.2 before it supports explicit bindings. We use GLES
usually, and Mesa supports GL 4.6, so we didn't hit this case before.
However, macOS does use GL, and macOS is stuck on GL 4.1.
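The kind of fallback this needs looks roughly like this (a sketch; the
GL calls are standard, the uniform name is hypothetical):

    #include <epoxy/gl.h>

    /* on GL < 4.2 the shader cannot use layout(binding = N) for its
     * samplers, so assign the bindings from the C side instead */
    static void
    setup_sampler_bindings (GLuint program)
    {
      int major = 0, minor = 0;

      glGetIntegerv (GL_MAJOR_VERSION, &major);
      glGetIntegerv (GL_MINOR_VERSION, &minor);

      if (major * 100 + minor < 402)
        {
          glUseProgram (program);
          glUniform1i (glGetUniformLocation (program, "tex0"), 0);  /* hypothetical name */
        }
    }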
Fixes: #6363
The intent of this change is to get wider testing and verify that the
new renderers are production-ready. If significant problems show
up, we will revert this change for 4.14.
The new preference order is ngl > gl > vulkan > cairo.
The gl renderer is still there because we need it to support gles2
systems, and vulkan still has some rough edges in application support
(no gl area support, webkit only works with gl).
If you need to override the default renderer choice, you can
still use the GSK_RENDERER environment variable.