A variation is a #define/specialization constant that every shader can
use to specialize itself as it sees fit.
This commit adds the infrastructure; future commits will add
implementations.
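As a rough illustration of the GL side of the idea (the helper and the
GSK_VARIATION macro are made up for this sketch, not the actual
infrastructure), the variation value can simply be prepended to the shader
source as a define:

    #include <glib.h>

    /* Hypothetical sketch: turn a variation value into a #define that the
     * shader source can branch on; on Vulkan the same value would be fed
     * in as a specialization constant instead. */
    static char *
    prepend_variation (const char *shader_source,
                       guint32     variation)
    {
      return g_strdup_printf ("#define GSK_VARIATION %uu\n%s",
                              variation, shader_source);
    }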
If we get into a situation where we need to redirect the clipping to an
offscreen, make sure that:
* the ubershader only gets used when it is beneficial
* we size the offscreen properly and don't let it grow without bounds.
Fixes the clip-intersection-fail-opacity test.
There are various places where the alpha is implicitly assumed to be
handled, so just handle it.
As a bonus, this simplifies a bunch of code and makes the texture node
rendering work with alpha.
Use an offscreen and mask it if the clips get too complicated.
Technically, the code could be improved to set the rounded clip on the
offscreen instead of rendering it as a mask, but that would require more
sophisticated tracking of clip regions by respecting the scissor, and
the current clip handling can't do that yet.
This removes one of the last places where the GPU renderer was still
using Cairo fallbacks.
This is for generating descriptors for more than one image. The arguments
for this function are quite awkward, but I couldn't come up with better
ones, and the function isn't that important.
And the call sites still look a lot nicer now.
For now this uses Cairo to generate a mask and then runs a mask op.
This is different from just using fallback in that the child is rendered
with the GPU and not via fallback.
The gradient shaders are now split into a generic part, shared by all of
them, that does the color stop handling, and a gradient-specific part that
needs to be implemented individually by each gradient.
If there are more than 7 color stops, we can split the gradient into
multiple gradients with color stops like so:
0, 1, 2, 3, 4, 5, transparent
transparent, 6, 7, 8, 9, 10, transparent
...
transparent, n-2, n-1, n
and use the new BLEND_ADD to draw them on top of each other.
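A minimal sketch of that splitting scheme (MAX_STOPS, emit_gradient() and
the exact placement of the transparent boundary stops are assumptions for
illustration, not the real code):

    #include <gtk/gtk.h>

    #define MAX_STOPS 7

    extern void emit_gradient (const GskColorStop *stops, gsize n_stops);

    /* Split the stops into chunks of at most MAX_STOPS, fading to
     * transparent at every cut so the chunks can be drawn on top of each
     * other with BLEND_ADD. */
    static void
    split_gradient (const GskColorStop *stops,
                    gsize               n_stops)
    {
      gsize i = 0;

      while (i < n_stops)
        {
          GskColorStop chunk[MAX_STOPS];
          gsize n = 0;

          if (i > 0) /* lead in from transparent at the previous stop */
            chunk[n++] = (GskColorStop) { stops[i - 1].offset, { 0, 0, 0, 0 } };

          while (n < MAX_STOPS - 1 && i < n_stops)
            chunk[n++] = stops[i++];

          if (i < n_stops) /* fade out to transparent at the next stop */
            chunk[n++] = (GskColorStop) { stops[i].offset, { 0, 0, 0, 0 } };

          emit_gradient (chunk, n);
        }
    }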
Adapt the testcase that tests this to use colors that work with the fancy
algorithm we use now, so that BLEND_ADD and the transitions to transparent
do not cause issues.
Use the unscaled coordinates instead of the scaled ones.
This ensures that gradients get computed correctly, as they are not safe
against non-orthogonal transforms, like scales with different scale
factors.
The shader can only deal with up to 7 color stops, but that's good enough
for the real world.
Plus, we have the ubershader.
And if that fails, we can still fall back to Cairo.
The code also doesn't handle repeating linear gradients yet.
This shader can take over from the ubershader: it can be used instead of
launching the ubershader when no offscreens are necessary.
Also included is an optimization that uses the colorize shader when
appropriate.
The ubershader has some corner cases where it can't be used, in
particular when the child is massively larger than the repeat node and
the repeat node is used to clip away most of the source.
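A minimal sketch of such a heuristic, with made-up names and a made-up
threshold:

    #include <gtk/gtk.h>

    /* If the child is much larger than the area the repeat node shows,
     * rendering the child to an offscreen and repeating that is cheaper
     * than having the ubershader sample the huge child directly. */
    static gboolean
    repeat_node_can_use_ubershader (const graphene_rect_t *child_bounds,
                                    const graphene_rect_t *node_bounds)
    {
      float child_area = child_bounds->size.width * child_bounds->size.height;
      float node_area = node_bounds->size.width * node_bounds->size.height;

      return child_area <= 4 * node_area; /* threshold is illustrative */
    }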
This is using the Vulkan renderer.
It also allows claiming support for all the formats that only Vulkan
supports, but that neither GL nor native mmap can handle.
Add GSK_GPU_IMAGE_RENDERABLE and GSK_GPU_IMAGE_FILTERABLE and make sure
to check formats for these features.
This requires reorganizing code to actually do this work instead of just
pretending formats are supported.
This fixes GLES upload tests with NGL.
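A sketch of the kind of check this implies; only the flag names come from
this commit, the helper itself is an assumption:

    /* Only treat a format as directly usable if the device reports it as
     * both renderable and filterable; otherwise do the conversion work
     * instead of pretending the format is supported. */
    static gboolean
    format_is_directly_usable (GskGpuImageFlags flags)
    {
      GskGpuImageFlags required = GSK_GPU_IMAGE_RENDERABLE |
                                  GSK_GPU_IMAGE_FILTERABLE;

      return (flags & required) == required;
    }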
This ensures both that we signal a semaphore for a dmabuf when we export
an image and that we import semaphores for dmabufs and wait on them.
Fixes Vulkan node-editor displaying the Vulkan renderer in the sidebar.
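In Vulkan terms, the core of this is wiring the semaphores into the queue
submission; a minimal sketch with placeholder variables:

    /* Wait on the semaphore imported for an incoming dmabuf and signal a
     * semaphore that is exported alongside our own dmabuf, so consumers
     * can wait for rendering to finish. */
    VkPipelineStageFlags wait_stage =
        VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    VkSubmitInfo submit_info = {
      .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
      .waitSemaphoreCount = 1,
      .pWaitSemaphores = &import_semaphore,
      .pWaitDstStageMask = &wait_stage,
      .commandBufferCount = 1,
      .pCommandBuffers = &command_buffer,
      .signalSemaphoreCount = 1,
      .pSignalSemaphores = &export_semaphore,
    };

    vkQueueSubmit (queue, 1, &submit_info, VK_NULL_HANDLE);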
Make gsk_renderer_render_texture() create a dmabuf texture if that is
possible.
If it isn't (i.e. if we're not on Linux or if dmabufs are otherwise not
working), fall back to the previous code of creating a memory texture.
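A rough sketch of the resulting structure, with made-up helper names:

    /* Hypothetical shape of the new gsk_renderer_render_texture() body;
     * both helpers are placeholders for this sketch. */
    GdkTexture *texture = NULL;

    #ifdef HAVE_DMABUF
      /* try to render into an exported dmabuf first */
      texture = render_to_dmabuf_texture (renderer, root, viewport);
    #endif

      if (texture == NULL)
        /* previous behaviour: render and download into a memory texture */
        texture = render_to_memory_texture (renderer, root, viewport);

      return texture;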
When using the ubershader a lot, we may overflow the storage buffer,
which is only 16 kB large.
Stop crashing when that happens and instead just allocate a new one.
This makes the (currently single) storage buffer handled by
GskGpuDescriptors.
A side effect is that we now have support for multiple buffers in place.
We just have to use it.
Mixed into this commit is a complete rework of the pattern writer.
Instead of writing straight into the buffer (complete with repeated
backtracking when we have to do offscreens), we now write into a temporary
buffer and copy it into the storage buffer on commit.
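A minimal sketch of the write-then-commit idea; the struct and both
helpers are placeholders, not the real pattern writer:

    #include <glib.h>
    #include <string.h>

    typedef struct {
      GByteArray *data; /* temporary CPU-side buffer */
    } PatternWriter;

    /* Appending never touches the GPU buffer, so offscreens can be
     * handled while writing without any backtracking. */
    static void
    pattern_writer_append (PatternWriter *self,
                           const guchar  *bytes,
                           gsize          size)
    {
      g_byte_array_append (self->data, bytes, size);
    }

    /* On commit, copy everything into the mapped storage buffer; if the
     * current buffer is too small, the caller can allocate a new one
     * instead of crashing. */
    static void
    pattern_writer_commit (PatternWriter *self,
                           guchar        *storage,
                           gsize          available)
    {
      g_assert (self->data->len <= available);
      memcpy (storage, self->data->data, self->data->len);
    }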
The GL branch should eventually call into gdk_gl_context_get_scale(),
which is what checks for GDK_DEBUG=gl-fractional; the Vulkan branch needs
no change.
If we have the choice between running the ubershader or a normal shader
with offscreens, make the choice depend on whether the ubershader would
need an offscreen anyway.
If so, just run the normal shader.
This really gets rid of all ubershader invocations in Adwaita
widget-factory.
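The shape of the decision, with made-up helper names:

    /* If the ubershader would have to go through an offscreen anyway, it
     * has no advantage, so use the specific shader (plus offscreens). */
    static void
    render_node_clipped (GskGpuNodeProcessor *self,
                         GskRenderNode       *node)
    {
      if (ubershader_would_use_offscreen (self, node))
        render_with_specific_shader (self, node);
      else
        render_with_ubershader (self, node);
    }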
Instead of using an enum, use the usual custom class struct, like we do
for GskGpuOp.
As a side effect of that refactoring, the display gained a hash table
for textures where we can't use the render data because the texture is
used in multiple renderers.
The goal here is that a texture is always cached and we can ensure that
there is a 1:1 relation between textures and their GskGpuImage. This is
important in particular for external textures - like dmabufs - where we
absolutely don't want 2 images with 2 device memories, and where we use
toggle references to keep them alive.
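A rough sketch of the class-struct shape (the names here are illustrative;
only the GskGpuOp-style layout is the point):

    #include <glib.h>

    typedef struct _GskGpuCached      GskGpuCached;
    typedef struct _GskGpuCachedClass GskGpuCachedClass;

    /* A per-class vtable struct in the style of GskGpuOp, replacing a
     * switch over an enum. */
    struct _GskGpuCachedClass
    {
      gsize size; /* instance size, so the cache can allocate entries */

      void     (* free)           (GskGpuCached *cached);
      gboolean (* should_collect) (GskGpuCached *cached,
                                   gint64        timestamp);
    };

    struct _GskGpuCached
    {
      const GskGpuCachedClass *class;
    };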