The new renderers don't support these nodes due to the complexity of
integrating them with Vulkan and the assumptions the nodes make about
the renderer (the GL renderer exports its internal APIs into the
GLShader).
There haven't been any complaints that I'm aware of since 4.14 was
released with a default renderer that does not support the nodes, so
public usage seems to be close to nonexistent.
The two uses I know of were workarounds for features missing in GTK,
and both have stopped now that GTK supports those features:
1. GStreamer used it to do premultiplication when the old GL renderer
did that on the CPU rather than in hardware.
2. Adwaita used it for masking before the mask node was added in 4.10.
This omission was noticed by Benjamin Otte. Add a premultiply
uniform to the external shader, and add a separate premultiply
shader for the non-external case.
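For readers unfamiliar with the term, premultiplication just scales the
color channels by alpha; a minimal C illustration of the operation (the
real work happens per fragment in the shaders, this is not GSK's code):

    /* Illustration only: premultiply a single RGBA pixel by scaling
     * the color channels by alpha. */
    static void
    premultiply_pixel (float rgba[4])
    {
      rgba[0] *= rgba[3];
      rgba[1] *= rgba[3];
      rgba[2] *= rgba[3];
      /* alpha is left unchanged */
    }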
We need to make sure that all our textures have the same memory
format, or we'll run into trouble in the upload code, at least
on GLES, which isn't as forgiving about format mismatches.
Related: #6238
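As an illustration of pinning down a single memory format, here is a
sketch using the public GdkTextureDownloader API (not the renderer's
actual upload path):

    #include <gdk/gdk.h>

    /* Sketch: download a texture's pixels in one fixed memory format
     * so later upload code only ever sees that format. Error handling
     * omitted. */
    static GBytes *
    download_as_rgba8 (GdkTexture *texture,
                       gsize      *out_stride)
    {
      GdkTextureDownloader *downloader = gdk_texture_downloader_new (texture);
      GBytes *bytes;

      gdk_texture_downloader_set_format (downloader,
                                         GDK_MEMORY_R8G8B8A8_PREMULTIPLIED);
      bytes = gdk_texture_downloader_download_bytes (downloader, out_stride);
      gdk_texture_downloader_free (downloader);

      return bytes;
    }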
Make sure all our dmabuf debug messages are display-scoped so the
inspector doesn't trigger them, use the same formatting throughout,
and improve consistency of wording here and there.
We are just poking at display members here, and there is no guarantee
that dmabuf formats have been initialized. So do it explicitly.
This prevents a crash in the inspector when viewing a recorded frame
containing a dmabuf texture, since the inspector uses a separate
display connection.
When we are running under GLES, we can use GL_TEXTURE_EXTERNAL_OES
to support YUV formats.
Since we don't want to deal with the combinatorial explosion of
compiling all our shaders with all combinations of sampler2D vs
samplerExternalOES for all their textures, we copy the external
textures to a regular texture before using them.
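The copy amounts to rendering the external texture into a plain
GL_TEXTURE_2D through an FBO. A simplified sketch, not the actual GSK
code; blit_program, draw_fullscreen_quad(), external_texture, width and
height are assumed to exist:

    /* Sketch: copy an external (e.g. YUV) texture into a regular
     * GL_TEXTURE_2D by drawing a fullscreen quad. The fragment shader
     * of blit_program is assumed to declare
     *   #extension GL_OES_EGL_image_external : require
     *   uniform samplerExternalOES u_source;
     */
    GLuint target, fbo;

    glGenTextures (1, &target);
    glBindTexture (GL_TEXTURE_2D, target);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                  GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffers (1, &fbo);
    glBindFramebuffer (GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                            GL_TEXTURE_2D, target, 0);

    glViewport (0, 0, width, height);
    glUseProgram (blit_program);
    glActiveTexture (GL_TEXTURE0);
    glBindTexture (GL_TEXTURE_EXTERNAL_OES, external_texture);
    draw_fullscreen_quad ();  /* the converted copy now lives in target */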
Add a GSK_GL_DEFINE_PROGRAM_NO_CLIP, which is like
GSK_GL_DEFINE_PROGRAM but compiles the shader just once,
with NO_CLIP defined.
This will be used in the future for shaders that do
texture conversion.
g_hash_table_insert() frees the passed-in key (using the key destroy
function) if that key already exists in the hashtable. But since we
keep using the same pointer on the following line, this results in a
use-after-free.
So instead, insert the key only if it doesn't exist yet.
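A minimal standalone reproduction of the pattern (placeholder keys and
values, not the actual GSK code):

    #include <glib.h>

    int
    main (void)
    {
      /* Keys and values are owned by the table, freed with g_free(). */
      GHashTable *table = g_hash_table_new_full (g_str_hash, g_str_equal,
                                                 g_free, g_free);
      char *key = g_strdup ("XR24");

      g_hash_table_insert (table, g_strdup ("XR24"), g_strdup ("old"));

      /* Buggy pattern: the key already exists, so g_hash_table_insert()
       * frees the passed-in `key` with the key destroy function, and
       * any later use of `key` reads freed memory:
       *
       *   g_hash_table_insert (table, key, g_strdup ("new"));
       *   g_print ("%s\n", key);
       */

      /* Fixed pattern: only hand `key` to the table if it isn't
       * there yet. */
      if (g_hash_table_contains (table, key))
        {
          g_print ("still valid: %s\n", key);
          g_free (key);
        }
      else
        {
          g_hash_table_insert (table, key, g_strdup ("new"));
        }

      g_hash_table_unref (table);

      return 0;
    }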
This is not the optimal way of doing it: we're
reuploading the texture with client-side conversion.
But it fits nicely into our current handling of mipmaps.
We can do better once we use shaders for colorspace
conversions.
This was the intention, but the object data by itself
does not achieve that: We do run dispose on the display
when it is closed, but object data is only cleared in
finalize. So listen to the ::closed signal and remove
the driver ourselves.
Fix up the driver's dispose implementation enough for
that to actually work.
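A sketch of the pattern with made-up names (the real driver removal
code differs):

    /* Sketch: drop the per-display driver as soon as the display is
     * closed instead of waiting for finalize() to clear the object
     * data. "-example-driver" is a placeholder key. */
    static void
    on_display_closed (GdkDisplay *display,
                       gboolean    is_error,
                       gpointer    user_data)
    {
      /* Clearing the object data runs its GDestroyNotify, which can
       * dispose the driver. */
      g_object_set_data (G_OBJECT (display), "-example-driver", NULL);
    }

    /* ... when creating the driver for a display: */
    g_signal_connect (display, "closed",
                      G_CALLBACK (on_display_closed), NULL);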
When the GL texture already has a mipmap, we don't
have to download and reupload it to generate one.
We differentiate the handling for texture scale nodes,
where we do want to force the mipmap creation even if
it requires us to reupload the GL texture, and plain
texture nodes, where we just take advantage of a
preexisting mipmap to allow trilinear filtering for
downscaling, or create one if we have to upload the
texture anyway.
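For reference, this is what using or creating a mipmap looks like in
plain GL terms (a sketch, not the renderer's code; texture_id and
has_mipmap are assumed):

    /* Sketch: if the texture already carries a mipmap, trilinear
     * downscaling only needs the right filter; glGenerateMipmap() is
     * only required when no mipmap exists yet. */
    glBindTexture (GL_TEXTURE_2D, texture_id);
    if (!has_mipmap)
      glGenerateMipmap (GL_TEXTURE_2D);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                     GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);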
Store texture coordinates for each slice
instead of assuming 0,0,1,1, and generate
overlapping slices to allow for proper mipmaps.
This almost fixes trilinear filtering with
sliced textures.
Instead of uploading a texture once per filter, ensure textures are
uploaded as little as possible and use samplers to switch between
different filters.
Unfortunately, we sometimes still have to reupload a texture, when it
is an external one and we want to create mipmaps.
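For reference, sampler objects (GL 3.3 / GLES 3.0) let one uploaded
texture be drawn with different filters without touching the texture
itself; a sketch, where texture_id and use_mipmaps are assumed:

    /* Sketch: one uploaded texture, two sampler objects; the sampler
     * bound to the texture unit overrides the texture's own filter
     * state, so no reupload is needed to change filtering. */
    GLuint samplers[2];

    glGenSamplers (2, samplers);

    glSamplerParameteri (samplers[0], GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glSamplerParameteri (samplers[0], GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glSamplerParameteri (samplers[1], GL_TEXTURE_MIN_FILTER,
                         GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri (samplers[1], GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glActiveTexture (GL_TEXTURE0);
    glBindTexture (GL_TEXTURE_2D, texture_id);
    glBindSampler (0, use_mipmaps ? samplers[1] : samplers[0]);
    /* ... draw ... */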
When filtering changes for an already-cached
texture, we need to clear the render data
before setting the new one, otherwise it
does not take and we end up reuploading
the texture every frame.
The API docs outline why quite well.
This should make it possible to do saving of textures to image files
without any private API with the same featureset that GTK uses.
Also remove the gsktextureprivate.h include where
gdk_texture_get_format() was the only reason for it.
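For example, application code could now pick an output file format
based on the texture format without private headers; a sketch (not
GTK's own saving code, save_texture() is a made-up helper):

    #include <gdk/gdk.h>

    /* Sketch: keep high-bit-depth textures lossless by writing TIFF,
     * fall back to PNG otherwise. */
    static gboolean
    save_texture (GdkTexture *texture,
                  const char *basename)
    {
      char *filename;
      gboolean ret;

      switch (gdk_texture_get_format (texture))
        {
        case GDK_MEMORY_R16G16B16A16_PREMULTIPLIED:
        case GDK_MEMORY_R32G32B32A32_FLOAT_PREMULTIPLIED:
          filename = g_strconcat (basename, ".tiff", NULL);
          ret = gdk_texture_save_to_tiff (texture, filename);
          break;
        default:
          filename = g_strconcat (basename, ".png", NULL);
          ret = gdk_texture_save_to_png (texture, filename);
          break;
        }

      g_free (filename);
      return ret;
    }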
This moves a lot of the texture atlas control out of the driver and into
the various texture libraries through their base GskGLTextureLibrary class.
Additionally, this gives libraries more control over allocation, which
can be necessary for some tooling such as future Glyphy integration.
As part of this, the 1x1 pixel initialization is moved to the Glyph library
which is the only place where it is actually needed.
The compact vfunc is now responsible for compaction, and it allows us
to iterate the atlas hashtable a single time instead of twice as we
were doing previously.
The init_atlas vfunc is used to do per-library initialization such as
adding a 1x1 pixel in the Glyph cache used for coloring lines.
The allocate vfunc purely allocates but does no upload. This can be useful
for situations where a library wants to reuse the allocator from the
base class but does not want to actually insert a key/value entry. The
glyph library uses this for its 1x1 pixel.
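The split might look roughly like this in the class vtable (parameter
lists are assumptions for illustration, not the actual
GskGLTextureLibraryClass layout):

    /* Hypothetical sketch of the vfuncs described above. */
    struct _GskGLTextureLibraryClass
    {
      GObjectClass parent_class;

      /* One pass over the atlas hashtable, dropping unused entries. */
      void     (* compact)    (GskGLTextureLibrary *self,
                               gint64               frame_time);

      /* Per-library setup of a freshly created atlas, e.g. the glyph
       * library's 1x1 pixel used for coloring lines. */
      void     (* init_atlas) (GskGLTextureLibrary *self,
                               GskGLTextureAtlas   *atlas);

      /* Reserve space only; no upload and no key/value entry. */
      gboolean (* allocate)   (GskGLTextureLibrary *self,
                               GskGLTextureAtlas   *atlas,
                               int                  width,
                               int                  height,
                               int                 *out_x,
                               int                 *out_y);
    };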
In the future, we will also likely want to decouple the rectangle packing
implementation from the atlas structure, or at least move it into a union
so that we do not allocate unused memory for alternate allocators.
This gives gskglprograms.defs a bit more control over how a shader
gets generated and whether it needs to combine sources. Currently, none of
the built-in shaders do that, but upcoming shaders which come from external
libraries will need the ability to inject additional sources in-between
layers.