This was the intention, but the object data by itself
does not achieve that: We do run dispose on the display
when it is closed, but object data is only cleared in
finalize. So listen to the ::closed signal and remove
the driver ourselves.
Fix up the driver's dispose implementation enough for
that to actually work.
When the GL texture already has a mipmap, we don't
have to download and reupload it to generate one.
We differentiate the handling for texture scale nodes,
where we do want to force mipmap creation even if it
requires us to reupload the GL texture, from plain
texture nodes, where we just take advantage of a
preexisting mipmap to allow trilinear filtering for
downscaling, or create one if we have to upload the
texture anyway.
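For illustration, the upload-path case sketched in plain GL (not the
actual renderer code): generating the mipmap is one extra call when
we upload anyway, and trilinear filtering then works for downscaling.

    glBindTexture (GL_TEXTURE_2D, texture_id);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                  GL_BGRA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap (GL_TEXTURE_2D);  /* derives levels 1..n from level 0 */
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                     GL_LINEAR_MIPMAP_LINEAR);  /* trilinear filtering */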
Store texture coordinates for each slice
instead of assuming 0,0,1,1, and generate
overlapping slices to allow for proper mipmaps.
This almost fixes trilinear filtering with
sliced textures.
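A hypothetical slice record, just to illustrate the shape of the
change (names are not from the actual code):

    typedef struct {
      guint texture_id;             /* GL texture backing this slice */
      cairo_rectangle_int_t rect;   /* area of the full texture covered;
                                     * neighboring slices overlap so that
                                     * mipmap levels don't bleed at edges */
      graphene_rect_t tex_coords;   /* stored per slice instead of being
                                     * assumed to be 0,0,1,1 */
    } TextureSlice;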
Instead of uploading a texture once per filter, ensure textures are
uploaded as rarely as possible and use sampler objects to switch
between filters.
Unfortunately, we sometimes still have to reupload a texture: when it
is an external one and we want to create mipmaps.
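The sampler mechanism, sketched in plain GL 3.3 (not the renderer's
actual wrapper): the texture is uploaded once, and the filter becomes
a property of the sampler bound to the texture unit.

    GLuint samplers[2];
    glGenSamplers (2, samplers);
    glSamplerParameteri (samplers[0], GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glSamplerParameteri (samplers[1], GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    /* same texture, different filtering, no reupload: */
    glBindSampler (unit, use_linear ? samplers[0] : samplers[1]);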
When filtering changes for an already-cached
texture, we need to clear the render data
before setting the new one, otherwise it
does not take and we end up reuploading
the texture every frame.
The API docs outline why quite well.
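A sketch of the fix, assuming GDK's private render-data helpers;
set_render_data() refuses to overwrite an existing entry, hence the
explicit clear:

    gdk_texture_clear_render_data (texture);
    gdk_texture_set_render_data (texture, self, cached, destroy_cb);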
This should make it possible to save textures to image files without
any private API, with the same feature set that GTK uses.
Also remove the gdktextureprivate.h include where
gdk_texture_get_format() was the only reason for it.
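For illustration only (both save helpers here are hypothetical): with
the format queryable, a saver can pick an encoder path that preserves
the texture's depth instead of forcing everything through 8-bit ARGB.

    if (gdk_texture_get_format (texture) == GDK_MEMORY_R16G16B16A16_FLOAT)
      save_as_float_image (texture);  /* hypothetical high-depth path */
    else
      save_as_8bit_image (texture);   /* hypothetical 8-bit path */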
This moves a lot of the texture atlas control out of the driver and into
the various texture libraries through their base GskGLTextureLibrary class.
Additionally, this gives libraries more control over allocation,
which can be necessary for some tooling such as a future Glyphy
integration.
As part of this, the 1x1 pixel initialization is moved to the Glyph
library, which is the only place where it is actually needed.
The compact vfunc is now responsible for compaction, and it allows us
to iterate the atlas hashtable a single time instead of twice as we
were doing previously.
The init_atlas vfunc is used to do per-library initialization such as
adding a 1x1 pixel in the Glyph cache used for coloring lines.
The allocate vfunc purely allocates but does no upload. This can be
useful for situations where a library wants to reuse the allocator
from the base class but does not want to actually insert a key/value
entry. The glyph library uses this for its 1x1 pixel.
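Roughly, the class then looks like this (vfunc names are from this
commit, the signatures are assumed for illustration):

    struct _GskGLTextureLibraryClass
    {
      GObjectClass parent_class;

      void     (*compact)    (GskGLTextureLibrary *self);   /* one pass */
      void     (*init_atlas) (GskGLTextureLibrary *self,
                              GskGLTextureAtlas   *atlas);  /* per-library
                                                             * setup */
      gboolean (*allocate)   (GskGLTextureLibrary *self,
                              GskGLTextureAtlas   *atlas,
                              int                  width,
                              int                  height,
                              int                 *out_x,
                              int                 *out_y);  /* no upload */
    };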
In the future, we will also likely want to decouple the rectangle packing
implementation from the atlas structure, or at least move it into a union
so that we do not allocate unused memory for alternate allocators.
This gives gskglprograms.defs a bit more control over how a shader
will get generated and whether it needs to combine sources. Currently,
none of
the built-in shaders do that, but upcoming shaders which come from external
libraries will need the ability to inject additional sources in-between
layers.
Don't pass texture + rect; instead, add
gdk_memory_texture_new_subtexture()
and use it to generate subtextures and pass those around.
This has the advantage of downloading a too-large texture only once
instead of N times.
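Sketched usage (the exact argument order is assumed):

    /* download the too-large texture once, then hand out views of it: */
    GdkTexture *sub = gdk_memory_texture_new_subtexture (memtex,
                                                         x, y,
                                                         width, height);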
It turns out glReadPixels() cannot convert pixels, and only specific
combinations of values are allowed for the format and type arguments.
You need to know which ones, or things will explode.
GL is great.
Pass a format to GdkTextureClass::download(). That way we can
download data in any format.
Also replace gdk_texture_download_texture() with
gdk_memory_texture_from_texture() which again takes a format.
The old functionality is still there for code that wants it: Just pass
gdk_texture_get_format (texture) as the format argument.
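So keeping the old behavior is a one-liner; the GdkMemoryTexture
return type is assumed here:

    GdkMemoryTexture *copy =
      gdk_memory_texture_from_texture (texture,
                                       gdk_texture_get_format (texture));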
We're caching two things: either a node itself being rendered, or a
parent storing a cached version of a child as rendered to an offscreen
at the size and location of the parent.
If both the parent and the child use the cache, this causes a
conflict, because the cache currently keys on the node pointer, which
has the same value for the node-as-itself and the
child-node-of-the-parent.
We fix this by adding another part to the key, "pointer_is_child",
which means we can have the same node pointer twice in the cache.
Additionally, in the child-is-rendered-offscreen case the offscreen
result actually depends on the position and size of the parent viewport,
so we need to store the parent bounds in that case.
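A hypothetical key layout implementing that scheme:

    typedef struct {
      GskRenderNode  *node;
      gboolean        pointer_is_child;  /* cached as child-of-parent? */
      graphene_rect_t parent_bounds;     /* only meaningful when
                                          * pointer_is_child is TRUE */
    } CacheKey;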
I found that the gears demo was spending 40% cpu
downloading a GL texture every frame, only to
upload it again to another context.
While the GSK rendering and the GtkGLArea use different
GL contexts, they are (usually) connected by sharing data
with the same global context, so we can just use the
texture without the download/upload dance. This brings
gears down to < 10% cpu.
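The fast path, sketched; the GL-texture getters are private GDK API,
and upload_copy() is a hypothetical stand-in for the old dance:

    if (gdk_gl_context_is_shared (renderer_context,
                                  gdk_gl_texture_get_context (gl_texture)))
      texture_id = gdk_gl_texture_get_id (gl_texture);  /* use directly */
    else
      texture_id = upload_copy (renderer_context, gl_texture);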
Do custom uploads rather than using gdk_cairo_surface_upload_to_gl(),
because this way we avoid a roundtrip (memcpy and possibly conversion)
through the cairo image surface format.
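A sketch of such a direct upload (formats illustrative): the source
rows go to GL in their own layout, skipping the cairo conversion.

    glPixelStorei (GL_UNPACK_ROW_LENGTH, stride / bytes_per_pixel);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                  GL_RGBA, GL_UNSIGNED_BYTE, data);
    glPixelStorei (GL_UNPACK_ROW_LENGTH, 0);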
We need to include both the scale and the filtering
in the key for the texture cache, since those affect
the texture.
This fixes misrendering in the recorder in the inspector
whenever transforms are involved. An example where this
was showing up is testrevealer's swing transition.
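Illustrative key shape (hypothetical names); the same texture at
another scale or filter must map to a different cache entry:

    typedef struct {
      GdkTexture *texture;
      int         scale;   /* surface scale factor */
      GLint       filter;  /* GL_LINEAR, GL_NEAREST, ... */
    } TextureCacheKey;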
When rendering to an offscreen because of transforms,
check whether transforming the bounds of the node results
in an axis-aligned quad. If it does, we want GL_NEAREST
interpolation to get sharp edges. Otherwise, we use
GL_LINEAR to get better results for things that are
actually transformed.
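A sketch of that check via GskTransform categories (an approximation:
the 2D-affine and simpler categories preserve axis alignment):

    switch (gsk_transform_get_category (transform))
      {
      case GSK_TRANSFORM_CATEGORY_IDENTITY:
      case GSK_TRANSFORM_CATEGORY_2D_TRANSLATE:
      case GSK_TRANSFORM_CATEGORY_2D_AFFINE:
        filter = GL_NEAREST;  /* still axis-aligned: keep edges sharp */
        break;
      default:
        filter = GL_LINEAR;   /* rotated, skewed or 3D: smooth it */
        break;
      }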
Sprinkle various g_assert() calls around the code where gcc cannot
figure out on its own that a variable is not NULL and too much
refactoring would be needed to make it do that.
Also fix usage of g_assert_nonnull(x) to use g_assert(x), because the
former is not marked as G_GNUC_NORETURN, because of course GTester
supports not aborting on aborts.
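The pattern being added looks like this (variable and callee made up
for illustration):

    g_assert (self->driver != NULL);  /* gcc can now assume non-NULL */
    do_something (self->driver);      /* ...so no warning here */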