We write a debug message and then handle things via the fallback path.
Fixes error messages when trying to import incompatible dmabufs.
(in my case: llvmpipe dmabufs into radv)
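Roughly, the shape of the change is something like the following sketch; the
import_texture_dmabuf() and import_texture_fallback() helpers are made-up
names, not the actual GDK functions:

    GError *error = NULL;

    if (!import_texture_dmabuf (self, texture, &error))
      {
        /* only a debug message, not a user-visible error... */
        g_debug ("dmabuf import failed: %s", error->message);
        g_clear_error (&error);
        /* ...and fall back to the slower upload path */
        import_texture_fallback (self, texture);
      }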
The non-shared context's surface must survive the lifetime of the
GL texture, and when the renderer gets unrealized the surface goes away,
but we cannot guarantee that all GL textures have been destroyed by
then.
So better use a context we know will survive because it isn't bound to a
surface.
This is the same fix for NGL as f3ac0535f8
was for GL.
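As a rough sketch of the idea, assuming the texture path can simply ask the
display for a context of its own (the actual NGL code goes through renderer
internals rather than public API):

    /* A display-wide context is not bound to any surface, so it is not
     * torn down when the renderer's surface goes away on unrealize. */
    GdkGLContext *context = gdk_display_create_gl_context (display, NULL);

    gdk_gl_context_realize (context, NULL);
    gdk_gl_context_make_current (context);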
Instead of running one renderpass per clip region, run one renderpass for
the whole clip extents, and just set the scissor to the individual clip
rects.
This means that we need to use LOAD_OP_LOAD in cases where we don't
redraw the full extents, but nonetheless, the performance wins of
avoiding renderpasses are worth it, in particular on tilers like the
Raspberry Pi or other mobile chips and the Apple M1/2.
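Schematically, the Vulkan side of this looks something like the sketch below;
the variable names are illustrative, not the actual GSK code:

    VkRenderPassBeginInfo info = {
      .sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO,
      .renderPass = render_pass,   /* attachment uses VK_ATTACHMENT_LOAD_OP_LOAD */
      .framebuffer = framebuffer,
      .renderArea = clip_extents,  /* union of all clip rectangles */
    };

    vkCmdBeginRenderPass (command_buffer, &info, VK_SUBPASS_CONTENTS_INLINE);
    for (uint32_t i = 0; i < n_clip_rects; i++)
      {
        vkCmdSetScissor (command_buffer, 0, 1, &clip_rects[i]);
        /* record the draw commands for this clip rectangle */
      }
    vkCmdEndRenderPass (command_buffer);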
We want to differentiate between CLEAR, DONT_CARE and LOAD in the
future, and the current boolean doesn't allow that.
Also implement support for the different ops in the Vulkan
renderpass code.
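A hypothetical shape for the tristate and its Vulkan mapping; the actual enum
name and values in GSK may differ:

    typedef enum {
      LOAD_OP_CLEAR,      /* clear the target first */
      LOAD_OP_DONT_CARE,  /* previous contents are irrelevant */
      LOAD_OP_LOAD        /* keep the previous contents */
    } LoadOp;

    static VkAttachmentLoadOp
    to_vk_load_op (LoadOp op)
    {
      switch (op)
        {
        case LOAD_OP_CLEAR:     return VK_ATTACHMENT_LOAD_OP_CLEAR;
        case LOAD_OP_DONT_CARE: return VK_ATTACHMENT_LOAD_OP_DONT_CARE;
        case LOAD_OP_LOAD:      return VK_ATTACHMENT_LOAD_OP_LOAD;
        default:                g_assert_not_reached ();
        }
    }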
This starts the renderpass at the given scissor rect.
It just splits out the gsk_gpu_render_pass_begin_op() call into a
simpler function, so it's harder to mess up.
Add gsk_gpu_node_processor_set_scissor() that allows resetting the
nodeprocessor's scissor and clip rectangle.
That in turn allows using the same nodeprocessor instance for all the
rects we draw for the clip region.
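The intended usage pattern, sketched with an illustrative
gsk_gpu_node_processor_add_node() step and variable names that are not the
actual code:

    for (int i = 0; i < cairo_region_num_rectangles (clip); i++)
      {
        cairo_rectangle_int_t rect;

        cairo_region_get_rectangle (clip, i, &rect);
        /* re-target the same processor instead of setting up a new one */
        gsk_gpu_node_processor_set_scissor (&self, &rect);
        gsk_gpu_node_processor_add_node (&self, node);
      }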
So they must not copy the fully_opaque flag from the child.
Adapted the testcase that accidentally caught it to now always catch it
by setting a proper background.
When we encounter many dead textures, we want to GC. Textures can take
up fds (potentially even more than 1) and when we are not collecting
them quickly enough, we may run out of fds.
So GC when more than 50 dead textures exist.
An example of this happening is recent Mesa with llvmpipe udmabuf
support, which takes 2 fds per texture and the test in
testsuite/gdk/memorytexture that creates 800 textures of ~1 pixel each,
which it diligently releases but doesn't GC.
Related: #6917
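Sketched with made-up names (only the threshold of 50 comes from this
commit):

    #define MAX_DEAD_TEXTURES 50

    /* Dead textures can each pin one or more fds, so don't wait for the
     * regular GC once too many of them have piled up. */
    if (cache->n_dead_textures > MAX_DEAD_TEXTURES)
      gc_textures (cache);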
The non-shared context's surface must survive the lifetime of the
GL texture, and when the renderer gets unrealized the surface goes away,
but we cannot guarantee that all GL textures have been destroyed by
then.
So better use a context we know will survive because it isn't bound to a
surface.
Use the clamp() API from the previous commit to:
1. Clamp values into range
2. Emit an error if values were out of range
Unlike CSS, which just clamps and doesn't emit an error, we do want to
emit one because we care about colors being correct in our node files.
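A sketch of how the clamp-and-report pattern might look; the helper name is
made up and the real clamp() API from the previous commit likely differs in
detail:

    static double
    clamp_with_error (GtkCssParser *parser,
                      double        value,
                      double        min,
                      double        max)
    {
      if (value < min || value > max)
        {
          /* unlike CSS, report out-of-range values instead of silently clamping */
          gtk_css_parser_error_value (parser, "Value %g out of range [%g, %g]",
                                      value, min, max);
        }
      return CLAMP (value, min, max);
    }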
Make sure the radii are strictly positive.
Also handle the case where start >= end.
We can't really underline that error, because we don't track the
locations of the start/end properties until we know that there's an
error.
So just underline the whole radial gradient declaration.
Test included
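Illustrative validation only, not the exact parser code (variable names are
assumptions):

    /* reject non-positive radii */
    if (hradius <= 0.0 || vradius <= 0.0)
      {
        gtk_css_parser_error_value (parser, "Radii must be strictly positive");
        return NULL;
      }

    /* reject an empty or inverted start/end range */
    if (start >= end)
      {
        gtk_css_parser_error_value (parser, "Start value must be smaller than end value");
        return NULL;
      }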
When transforming back from a complex transform to a simpler transform,
the resulting clip might turn out to clip everything, because clips can
grow while transforming, but the scissor rect won't. So when this
process happens, we can end up with an empty clip by transforming:
1. Set a clip that also sets the scissor
2. transform in a way that grows the clip, say rotate(45)
3. modify the clip to shrink it
4. transform in a way that simplifies the transform, say another
rotate(45)
5. Figure out that this clip and the scissor rect no longer overlap
Catch this case and avoid drawing anything.
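A sketch of the check, with an illustrative clip_to_device_rect() helper and
field names that are not the actual node processor internals:

    cairo_rectangle_int_t device_clip;

    /* project the (possibly grown) clip back into device space */
    clip_to_device_rect (&self->clip, &self->modelview, &device_clip);

    /* if it no longer overlaps the scissor rect, there is nothing to draw */
    if (!gdk_rectangle_intersect (&device_clip, &self->scissor, NULL))
      return;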
Add a new GskShadow2 struct that has a GdkColor instead of
a GdkRGBA, and a new constructor and getter to go along with
it.
With this commit, my_color_get_depth is no longer used and
has been dropped.
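A hypothetical layout for the new struct; the actual GskShadow2 definition and
its constructor and getter may differ:

    typedef struct _GskShadow2 GskShadow2;

    struct _GskShadow2
    {
      GdkColor color;   /* replaces the GdkRGBA of GskShadow */
      float dx;
      float dy;
      float radius;
    };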
We know it at begin_frame() time, so if we pass it there instead of
end_frame(), we can use it then to make decisions about opacity.
For example, we could notice that the whole surface is opaque and choose
an RGBx format.
We don't do that yet, but now we could.
This is poking into the surface directly, so not a good idea.
And I want to hide that struct in the priv member.
Technically, this code should look at the opaque region, but I am lazy
and the GL renderer is on its way out, so I think it's not worth doing.
Make sure both GL renderers don't keep their contexts alive via the
current context, and instead dispose of them properly.
Fixes issues when the corresponding GL resources in the surfaces they
were attached to go away.
GLContexts marked as surface_attached are always attached to the surface
in make_current().
Other contexts continue to only get attached to their surface between
begin_frame() and end_frame().
All our renderers use surface-attached contexts now.
Public API only gives out non-surface-attached contexts.
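Illustrative pseudologic for make_current(); the surface_attached flag,
bind_to_surface() and get_surface() are assumed names, not the real backend
code:

    static void
    make_current (GdkGLContext *context)
    {
      gboolean attach;

      /* surface-attached contexts always bind their surface; other
       * contexts only do so between begin_frame() and end_frame() */
      attach = context->surface_attached ||
               gdk_draw_context_is_in_frame (GDK_DRAW_CONTEXT (context));

      bind_to_surface (context, attach ? get_surface (context) : NULL);
    }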
The benefit here is that we can now call make_current() whenever we want,
because it will not trigger a re-make_current() whether we call it outside
or inside the begin/end_frame() region.
Or in other words: I want to call make_current() before begin_frame()
without a performance penalty, and now I can.