When we encounter many dead textures, we want to GC. Textures can take
up fds (potentially even more than 1) and when we are not collecting
them quickly enough, we may run out of fds.
So GC when more than 50 dead textures exist.
An example of this is recent Mesa with llvmpipe udmabuf support, which
takes 2 fds per texture, combined with the test in
testsuite/gdk/memorytexture that creates 800 textures of ~1 pixel each,
diligently releasing them but never GCing.
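A minimal sketch of the threshold check; the constant, the priv field and
gsk_gpu_device_gc's timestamp argument are assumptions here, not the
actual gsk code:

    #define MAX_DEAD_TEXTURES 50

    static void
    maybe_gc_dead_textures (GskGpuDevice *self)
    {
      GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);

      /* every dead texture may pin one or more fds, so don't let them pile up */
      if (priv->n_dead_textures > MAX_DEAD_TEXTURES)
        gsk_gpu_device_gc (self, g_get_monotonic_time ());
    }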
Related: #6917
gsk_gpu_device_gc() may release the last ref on the GskGpuDevice,
leading to memory corruption when setting priv->cache_gc_source = 0.
Includes a bit of refactoring, so the ref/unref wraps nicely around the
actual code.
Fixes crashes seen after using the inspector and closing the window,
thereby closing all windows of a display and releasing all references to
the device.
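Roughly, the wrapping looks like this; the callback shape and the priv
access are illustrative:

    static gboolean
    cache_gc_cb (gpointer user_data)
    {
      GskGpuDevice *self = user_data;
      GskGpuDevicePrivate *priv = gsk_gpu_device_get_instance_private (self);

      /* gc may drop the last ref on the device; keep it alive while we
       * still touch priv afterwards */
      g_object_ref (self);

      gsk_gpu_device_gc (self, g_get_monotonic_time ());
      priv->cache_gc_source = 0;

      g_object_unref (self);

      return G_SOURCE_REMOVE;
    }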
Fixes: #6861
If desired, try creating GL_SRGB images. Pass a try_srgb boolean down to
the image creation functions and have them attempt to create such
images.
When it is not possible to create sRGB images in the given format, just
fall back to regular images. The calling code is meant to check the
GSK_GPU_IMAGE_SRGB flag to determine the actual format of the resulting
image.
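Caller-side, the check could look like this; create_image stands in for
whichever creation function is used:

    GskGpuImage *image;

    image = create_image (self, width, height, format, TRUE /* try_srgb */);

    if (gsk_gpu_image_get_flags (image) & GSK_GPU_IMAGE_SRGB)
      {
        /* we really got an sRGB image; sampling linearizes for us */
      }
    else
      {
        /* fallback to a regular image; we have to convert ourselves */
      }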
I was watching the log in my terminal and nothing happened.
And I wasn't sure if that was because nothing was printed or because the
same thing was printed every few seconds.
Fix that by printing a timestamp, so that in a few seconds something
else will be printed.
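For example, with GLib's GDateTime (the surrounding log call is
illustrative):

    GDateTime *now = g_date_time_new_now_local ();
    char *timestamp = g_date_time_format (now, "%H:%M:%S");

    g_print ("[%s] %s\n", timestamp, message);

    g_free (timestamp);
    g_date_time_unref (now);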
This is for 3 reasons:
1. Separation of concerns
The device is meant to manage the Vulkan/GL device and check stuff
like image sizes.
Caching is not part of that.
2. Refcounting
Images etc want to reference the device, but the cache wants to
reference images. If the cache is the device, that's a refcycle.
3. Flexibility
It's now easier to implement >1 cache, say one per depth or one per
color state.
On Windows, gsize is unsigned long long, and the compiler complains
about that.
Use G_GSIZE_FORMAT, which expands to %llu on Windows, %lu on most
platforms, and %u in rare cases.
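For example:

    gsize size = 1234;

    /* expands to the right conversion specifier for gsize on each platform */
    g_print ("%" G_GSIZE_FORMAT " bytes\n", size);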
It turns out that we mispositioned glyphs with some cff fonts
when metrics hinting is off and hinting is on. Since we don't
fully understand the interactions of these settings at this point,
let's preserve metrics hinting as it was on the font we got.
This at least gives folks a workaround for when they experience
clipped rendering with cff fonts: Turn on hint-metrics.
We forced hint metrics off here because it made Pango do some
creative workarounds for hex boxes at small sizes, but I've dropped
that on the Pango side.
Make a single gsk_reload_font helper that can tweak both
scale and font options, so we can ensure that our scaled
font has hint-metrics turned off (pango pays attention to
hint metrics when sizing and rendering hex boxes, and that
hurts us).
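A minimal sketch of the font-options side, using cairo's API; the
helper name is illustrative:

    static cairo_font_options_t *
    options_without_hint_metrics (const cairo_font_options_t *original)
    {
      cairo_font_options_t *options = cairo_font_options_copy (original);

      /* keep pango from applying hint metrics when sizing hex boxes */
      cairo_font_options_set_hint_metrics (options, CAIRO_HINT_METRICS_OFF);

      return options;
    }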
This changes the approach we take to rendering glyphs in the
presence of a scale transform: Instead of scaling the extents
and rendering to an image surface with device scale, simply
create a scaled font and use it for extents and rendering.
This avoids clipping problems with scaling of extents in
the presence of hinting.
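Sketched below; gsk_reload_font's exact signature is an assumption:

    PangoFont *scaled;
    PangoRectangle ink_rect;

    /* measure and render with a font that already includes the scale,
     * instead of scaling the extents afterwards */
    scaled = gsk_reload_font (font, scale);
    pango_font_get_glyph_extents (scaled, glyph, &ink_rect, NULL);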
We can reach the code that removes the item from the hash table
before or after the weak notify has fired. Just leave the
weak ref in place and let it do its thing if it hasn't fired
yet. That matches what we do in free.
Fixes: #6377
We do gc in a timeout, when an arbitrary GL context might be
current, so we need to make sure it's ours and we don't free
random textures in another context.
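Sketch of the guard; the context field is illustrative:

    /* the timeout can fire with any GL context current; only touch our
     * textures with our own context made current */
    if (gdk_gl_context_get_current () != self->context)
      gdk_gl_context_make_current (self->context);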
Fixes: #6366
Count dead pixels in textures (i.e. the number of pixels in GPU
textures that are no longer backed by a live GdkTexture object),
and when there are too many, do a gc before rendering the next
frame.
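Illustrative accounting; the field, constant and gc signature are
assumptions:

    /* the GdkTexture died, but its GPU copy is still cached */
    priv->dead_texture_pixels += (gsize) width * height;

    if (priv->dead_texture_pixels > MAX_DEAD_TEXTURE_PIXELS)
      gsk_gpu_device_gc (self, g_get_monotonic_time ());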
Count the uses of a cached texture - from the device (via the linked
list) and from the texture (via render data / weak ref) - and only
free the item once the use count reaches zero.
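Sketch of the counting; type and function names are illustrative:

    static void
    cached_texture_unuse (CachedTexture *texture)
    {
      texture->use_count--;

      /* free only once neither the device's list nor the GdkTexture's
       * render data points at us anymore */
      if (texture->use_count == 0)
        cached_texture_free (texture);
    }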
Instead of forever running a timeout to do gc, ensure the timeout
is scheduled whenever we render a frame (this is done by calling
gsk_gpu_device_maybe_gc () before gsk_gpu_frame_render (), and
gsk_gpu_device_queue_gc () after).
Read the GSK_CACHE_TIMEOUT environment variable to override the
default 15s timeout for cache gc. This is mainly meant for debugging.
Since we don't really need two knobs, reuse the gc timeout value
for the max age of items too.
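Sketch of reading the override; the parsing details are assumptions:

    gint64 timeout = 15; /* seconds */
    const char *env = g_getenv ("GSK_CACHE_TIMEOUT");

    if (env)
      timeout = g_ascii_strtoll (env, NULL, 10);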
Count how many dead pixels we have, and free the atlas if more than
half of its pixels are dead.
As part of this, change when glyphs are freed. We now keep them
in the hash table until their atlas is freed, and we only do dead
pixel accounting when should_collect is called. This keeps the
glyphs available for use from the cache as long as they are in the
atlas. If a stale glyph is used, we 'revive' it by removing its
pixels from the dead count.
This matches more closely what the gl renderer does.
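The heuristic, sketched with illustrative names:

    static gboolean
    atlas_should_collect (Atlas *atlas)
    {
      /* drop the whole atlas once most of its pixels are garbage */
      return atlas->dead_pixels > atlas->width * atlas->height / 2;
    }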
If we gc a cached texture for which the GdkTexture is still alive,
the cached texture object will remain accessible via the render
data, so we need to make sure not to leave a dangling pointer
behind here.
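Sketch of the cleanup, assuming GDK's private render-data helpers are
what holds the back-pointer:

    /* the GdkTexture outlives the cached object, so detach it */
    if (gdk_texture_get_render_data (texture, cache) == cached)
      gdk_texture_clear_render_data (texture);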
This is straightforward. If a texture hasn't been used for 4 seconds,
we consider it stale, and drop it the next time gc comes around.
The choice of 4 seconds is arbitrary.
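Sketch of the check; names are illustrative, the 4s threshold is the
one described above:

    static gboolean
    cached_texture_is_stale (CachedTexture *texture, gint64 now)
    {
      return now - texture->last_use > 4 * G_USEC_PER_SEC;
    }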
Fixes: #6346
Previously, we only checked if the cache had exhausted the maximum
number of slices.
But we also need to check that the height of the slices doesn't exceed
the height of the texture.
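The combined check then looks roughly like this (names and constants
are assumptions):

    static gboolean
    atlas_is_full (Atlas *atlas, int slice_height)
    {
      return atlas->n_slices >= MAX_SLICES ||
             atlas->slices_height + slice_height > atlas->height;
    }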
Instead of using an enum, use the usual custom class struct approach,
like we do for GskGpuOp.
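Such a class struct looks roughly like this; the exact fields are
assumptions:

    struct _GskGpuCachedClass
    {
      gsize size;

      void     (* free)           (GskGpuCached *cached);
      gboolean (* should_collect) (GskGpuCached *cached,
                                   gint64        timestamp);
    };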
As a side effect of that refactoring, the display gained a hash table
for textures where we can't use the render data because the texture is
used in multiple renderers.
The goal here is that a texture is always cached and we can ensure that
there is a 1:1 relation between textures and their GskGpuImage. This is
important in particular for external textures - like dmabufs - where we
absolutely don't want 2 images with 2 device memories, and where we use
toggle references to keep them alive.
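Sketch of the toggle-ref part; the callback body and names are
illustrative:

    static void
    texture_toggle_notify (gpointer  data,
                           GObject  *object,
                           gboolean  is_last_ref)
    {
      if (is_last_ref)
        {
          /* only the cache references the texture now;
           * its GPU image can be released */
        }
    }

    static void
    cache_external_texture (GskGpuCache *cache,
                            GdkTexture  *texture)
    {
      g_object_add_toggle_ref (G_OBJECT (texture),
                               texture_toggle_notify, cache);
    }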