The warning gets triggered by rounding errors.
In particular when using fractional scales, the final tile may end up
not accurately matching the computed final value. In the example I was
debugging, the final tile index came out as 1.00000036 instead of 1,
and that in turn produced a 0px wide tile size.
And for that tile size we hit the exit condition.
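As a rough standalone illustration (values made up, not the actual GSK
tiling code), this is the kind of float arithmetic that produces an
index just above 1 and a final tile that rounds to 0px:

    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* Illustrative numbers only. */
      float tile_size = 128.f;
      float scaled_width = 128.f * 1.0000004f;       /* fractional scale */
      float final_index = scaled_width / tile_size;  /* slightly above 1 */
      float final_tile = scaled_width - floorf (final_index) * tile_size;

      printf ("final index: %.8f\n", final_index);
      printf ("final tile:  %.8f px\n", final_tile);  /* rounds to 0px */
      return 0;
    }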
Only initialize the Vulkan or EGL parts where possible.
When dmabufs or dmabuf formats are actually used, we still
initialize fully by creating both a Vulkan and EGL downloader.
This shortens the time to first commit from 149ms to 108ms.
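A hypothetical sketch of the lazy setup (names are made up; the real
code lives in GDK's dmabuf handling): the expensive downloaders only
get created once dmabufs or dmabuf formats are actually needed.

    #include <gdk/gdk.h>

    /* Hypothetical sketch, not the actual GDK code. */
    gpointer create_vulkan_downloader (GdkDisplay *display);
    gpointer create_egl_downloader    (GdkDisplay *display);

    typedef struct
    {
      GdkDisplay *display;
      gpointer    vulkan_downloader;   /* NULL until needed */
      gpointer    egl_downloader;      /* NULL until needed */
    } DmabufDownloaders;

    static void
    dmabuf_downloaders_ensure (DmabufDownloaders *self)
    {
      /* Pay the full Vulkan + EGL initialization cost only when a
       * dmabuf (or the dmabuf format list) is actually used, instead
       * of unconditionally at startup. */
      if (self->vulkan_downloader == NULL)
        self->vulkan_downloader = create_vulkan_downloader (self->display);
      if (self->egl_downloader == NULL)
        self->egl_downloader = create_egl_downloader (self->display);
    }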
In the colorize and texture ops, print the tex rect. This is useful
because when adding new features with textures (like atlas usage), these
are the ops that I use for testing.
Instead of recreating frames from scratch every time, use an existing one.
This ensures that renderers don't need to recreate GPU resources every
time (like buffers and everything else that frames manage). It also
speeds up occasional render_texture() calls in default renderers.
This speeds up the Vulkan renderer in particular.
This is useful because cleaning up will do the final copies of texture
data.
It also means we use less memory, as we're going to release images that
were used in ops.
If no ops are recorded, then we don't need to wait for any ops to
finish.
Also fix the initial fence creation on Vulkan - we no longer need to
create it signaled, because the random cleanup() call at startup no
longer happens.
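For illustration, a plain-Vulkan sketch of the changed fence creation
(not the exact GSK code): with the startup cleanup() gone, nothing
waits on the fence before the first submit, so it no longer needs
VK_FENCE_CREATE_SIGNALED_BIT.

    #include <vulkan/vulkan.h>

    /* Plain Vulkan sketch, not the actual GSK code. */
    static VkFence
    create_frame_fence (VkDevice device)
    {
      VkFenceCreateInfo fence_info = {
        .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
        /* Previously: VK_FENCE_CREATE_SIGNALED_BIT, so that the
         * cleanup() call at startup would not block on a fence that
         * was never submitted.  That call no longer happens. */
        .flags = 0,
      };
      VkFence fence = VK_NULL_HANDLE;

      vkCreateFence (device, &fence_info, NULL, &fence);
      return fence;
    }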
In the rare situation (read: I triggered it with obscure hacks) where no
ops are emitted, we could end up pointing into invalid memory and
crashing.
Don't do that.
Opaque textures don't clamp to transparent but instead to black.
We didn't consider this, so we were blurring their edges into blackness,
not into transparency.
Fix this by adding the GSK_GPU_AS_IMAGE_SAMPLED_OUT_OF_BOUNDS flag
and respecting it in the implementation that uses it.
Test included.
Fixes #6980
Swizzling is not respected for blitting.
See commit 058252e895 for the same change in Vulkan.
Apparently that never made it to ngl.
The next commit will have a test for this.
I've had a need for flags for the get_as_image() call but so far have
been able to work around it. But now it seems I might finally need it.
This just introduces the flags but doesn't add any.
Related: #6980
This variant takes the color_states instead of computing them
anew from the ccs and the color state of the color. This will
be used to pull this work out of the loop in add_glyph_node.
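A self-contained toy model of the pattern (none of these names are the
real GSK ones): the color states are computed once per node and passed
to the new variant inside the glyph loop.

    /* Toy sketch only; types and names are made up. */
    typedef struct { int ccs; int color_cs; } ColorStates;

    static ColorStates
    compute_color_states (int ccs, int color_cs)
    {
      ColorStates cs = { ccs, color_cs };
      return cs;
    }

    static void
    add_glyph_op (ColorStates cs, int glyph)
    {
      /* ... record the op for one glyph using the precomputed states ... */
      (void) cs;
      (void) glyph;
    }

    static void
    add_glyph_node (int ccs, int color_cs, const int *glyphs, int n_glyphs)
    {
      /* Hoisted: computed once per node instead of once per glyph. */
      ColorStates cs = compute_color_states (ccs, color_cs);

      for (int i = 0; i < n_glyphs; i++)
        add_glyph_op (cs, glyphs[i]);
    }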
Removed via regex and grep.
The following were intentionally not removed:
- GtkImage:file: (attributes org.gtk.Property.set=gtk_image_set_from_file)
- GtkImage:resource: (attributes org.gtk.Property.set=gtk_image_set_from_resource)
As they have no getter, and a (setter PROP) without a (getter PROP) crashes
gobject-introspection. This is fixed by
ad3118eb51.
The annotations should only be set when the setter or getter for a
property "GtkClassName:prop-name" is not named
gtk_class_name_set_property_name or gtk_class_name_get_property_name.
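A made-up example of a property where the annotation has to stay,
because the setter name does not follow that pattern:

    /* Hypothetical property; GtkFoo and gtk_foo_assign_label() are
     * made up for illustration. */
    /**
     * GtkFoo:label: (attributes org.gtk.Property.set=gtk_foo_assign_label)
     *
     * The setter is gtk_foo_assign_label(), not gtk_foo_set_label(),
     * so introspection cannot infer it and the annotation is needed.
     */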
linear will average all the pixels for the lod, nearest will just pick
one (using the same method as OpenGL/Vulkan: the pixel at the bottom
right of the center).
This doesn't really make linear/nearest filtering work as it should
(because it's still a form of mipmaps), but it has 2 advantages:
1. it gets closer to the desired effect
2. it is a lot faster
Because only 1 pixel is chosen from the original image instead of
averaging all pixels, a lot less memory needs to be accessed, and
because memory access is the bottleneck for large images, the speedup
scales almost linearly with the number of pixels we skip.
That means that even for lod level 3, aka 1/8th scale, only 1/64 of
the pixels need to be accessed, and everything gets roughly 50x faster.
Switching gtk4-demo --run=image_scaling to linear/nearest makes all the
lag go away for me, even with a 64k x 64k image.
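An illustrative sketch of the nearest case (the picking formula is
assumed from the description above, not copied from GDK): per
destination pixel exactly one source pixel is read, the one just below
and to the right of the block center.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative sketch, not the GDK implementation.  Assumes tightly
     * packed RGBA8 and src dimensions equal to dst dimensions << lod. */
    static void
    mipmap_nearest_rgba8 (uint32_t       *dst,
                          size_t          dst_width,
                          size_t          dst_height,
                          const uint32_t *src,
                          size_t          src_width,
                          unsigned int    lod)
    {
      /* Offset to the pixel at the bottom right of the block center. */
      size_t off = lod > 0 ? (size_t) 1 << (lod - 1) : 0;

      for (size_t y = 0; y < dst_height; y++)
        for (size_t x = 0; x < dst_width; x++)
          dst[y * dst_width + x] =
            src[((y << lod) + off) * src_width + ((x << lod) + off)];
    }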
Why do we need this? Because RGB images are provided in RGB format but
GPUs can't handle RGB, only RGBA, so we need to convert.
And we need to do that without allocating too much memory, because
allocating memory is slow. Which means in particular we need to do the
conversion after mipmapping, not before (like we were doing).
This allows uploading less memory but requires computing lod levels on
the CPU, which is slow because it reads through all of the memory and is
so far entirely unoptimized.
However, it uses significantly less VRAM.
This is done by adding a gdk_memory_mipmap() function that does this
task.
The texture upload op now accepts a lod level and if that is >0 it uses
gdk_memory_mipmap() on the source texture.
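As a sketch of what such a CPU mipmap pass boils down to for the linear
case (simplified to RGBA8 and a single halving step; the real
gdk_memory_mipmap() handles many formats and lod levels): every output
pixel averages a 2x2 block, so all source pixels get read, which is the
slow part mentioned above.

    #include <stddef.h>
    #include <stdint.h>

    /* Simplified sketch, not the real gdk_memory_mipmap(): halve a
     * tightly packed RGBA8 image once by averaging 2x2 blocks. */
    static void
    halve_rgba8 (uint8_t       *dst,   /* (src_width/2) x (src_height/2) */
                 const uint8_t *src,
                 size_t         src_width,
                 size_t         src_height)
    {
      size_t dst_width = src_width / 2;

      for (size_t y = 0; y < src_height / 2; y++)
        for (size_t x = 0; x < dst_width; x++)
          for (size_t c = 0; c < 4; c++)
            {
              const uint8_t *p = src + (2 * y * src_width + 2 * x) * 4 + c;

              /* Average the 2x2 block (rounded). */
              dst[(y * dst_width + x) * 4 + c] =
                (uint8_t) ((p[0] + p[4] +
                            p[src_width * 4] + p[src_width * 4 + 4] + 2) / 4);
            }
    }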
Depth of a rendernode should be determined by the textures used and the
compositing colorstate requirements.
Colors influence the colorstate choice, so they indirectly influence the
depth, but they should not influence the depth directly.
Otherwise a single rec2100-pq color in a border would make us
switch to 16bit float.
Also remove gdk_color_get_depth(), because it was only used here and
because again: Colors should not influence depth decisions.
We want to use the display's context on the resulting texture,
but we do not want to use it for the stuff we need to do while
exporting - most importantly the GLsync.
Fixes #6976
The texture ID is not deleted on dmabuf export; a copy is made, the
GskGpuImage retains ownership.
However, when doing GL export, the texture *does* take ownership, so we
need the stealing semantics for that case.
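A hypothetical sketch of the two ownership modes (names made up):
dmabuf export leaves the GL name with the image, GL export steals it so
the resulting texture can delete it.

    /* Hypothetical sketch, not the actual GSK API. */
    typedef struct
    {
      unsigned int texture_id;  /* GL name owned by the image */
    } Image;

    /* dmabuf export: the image keeps ownership and deletes the GL name
     * later; the export only made a copy. */
    static unsigned int
    image_get_texture_id (Image *image)
    {
      return image->texture_id;
    }

    /* GL export: the created GdkGLTexture takes over the GL name, so
     * the image must forget it and must not delete it. */
    static unsigned int
    image_steal_texture_id (Image *image)
    {
      unsigned int id = image->texture_id;

      image->texture_id = 0;
      return id;
    }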
We write a debug message and then handle things using fallback.
Fixes error messages when trying to import incompatible dmabufs.
(in my case: llvmpipe dmabufs into radv)
The non-shared context's surface must survive the lifetime of the
GL texture, and when the renderer gets unrealized the surface goes away,
but we cannot guarantee that all GL textures have been destroyed by
then.
So better use a context we know will survive because it isn't bound to a
surface.
This is the same fix for NGL as f3ac0535f8
was for GL.
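A minimal sketch, assuming the public GDK API (the actual GSK code
differs): create the context for exported GL textures from the display,
not from the renderer's surface, so it stays valid after the renderer
is unrealized.

    #include <gdk/gdk.h>

    /* Sketch only; uses public GDK API rather than the GSK internals. */
    static GdkGLContext *
    get_texture_gl_context (GdkDisplay  *display,
                            GError     **error)
    {
      GdkGLContext *context = gdk_display_create_gl_context (display, error);

      if (context == NULL)
        return NULL;

      /* Not bound to any surface, so it survives the renderer being
       * unrealized while exported GL textures are still around. */
      if (!gdk_gl_context_realize (context, error))
        g_clear_object (&context);

      return context;
    }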