This moves a lot of the texture atlas control out of the driver and into
the various texture libraries through their base GskGLTextureLibrary class.
Additionally, this gives libraries more control over allocation, which
can be necessary for some tooling such as a future Glyphy integration.
As part of this, the 1x1 pixel initialization is moved to the Glyph
library, which is the only place where it is actually needed.
The compact vfunc is now responsible for compaction, and it allows us
to iterate the atlas hashtable a single time instead of twice as we
did previously.
The init_atlas vfunc is used to do per-library initialization, such as
adding a 1x1 pixel to the Glyph cache used for coloring lines.
The allocate vfunc purely allocates but does no upload. This can be
useful for situations where a library wants to reuse the allocator from
the base class but does not want to actually insert a key/value entry.
The glyph library uses this for its 1x1 pixel.
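Sketched out, the vtable then looks something like this (the vfunc
names are as above, but the parameter lists are illustrative rather
than the exact signatures):

    struct _GskGLTextureLibraryClass
    {
      GObjectClass parent_class;

      /* Compact the atlas; walks the atlas hashtable itself, so
       * the driver only iterates it once per frame. */
      void     (*compact)    (GskGLTextureLibrary *self,
                              gint64               frame_id);

      /* Per-library setup, e.g. the glyph cache's 1x1 pixel. */
      void     (*init_atlas) (GskGLTextureLibrary *self,
                              GskGLTextureAtlas   *atlas);

      /* Reserve an area in the atlas without uploading anything
       * and without inserting a key/value entry. */
      gboolean (*allocate)   (GskGLTextureLibrary *self,
                              GskGLTextureAtlas   *atlas,
                              int                  width,
                              int                  height,
                              int                 *out_x,
                              int                 *out_y);
    };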
In the future, we will also likely want to decouple the rectangle packing
implementation from the atlas structure, or at least move it into a union
so that we do not allocate unused memory for alternate allocators.
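For instance, something along these lines (a purely hypothetical
layout, assuming stb_rect_pack as the current packer):

    struct _GskGLTextureAtlas
    {
      guint texture_id;
      int   width;
      int   height;

      /* Hypothetical: only pay for the allocator actually in use. */
      union {
        stbrp_context rect_pack;                /* rectangle packing */
        struct { int x, y, row_height; } shelf; /* alternate allocator */
      } allocator;
    };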
This gives gskglprograms.defs a bit more control over how a shader
will get generated and whether it needs to combine sources. Currently,
none of the built-in shaders do that, but upcoming shaders which come
from external libraries will need the ability to inject additional
sources in between layers.
Don't pass texture + rect; instead, add
gdk_memory_texture_new_subtexture()
and use it to generate subtextures and pass those.
This has the advantage of downloading a too-large texture only once
instead of N times.
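Usage then becomes something like this (a sketch;
get_downloaded_texture(), upload_tile() and the tiles array are
hypothetical):

    /* Download the oversized texture once ... */
    GdkMemoryTexture *memtex = get_downloaded_texture (texture);

    /* ... then hand out cheap subtextures instead of downloading
     * the full texture once per tile. */
    for (guint i = 0; i < n_tiles; i++)
      {
        GdkTexture *tile =
          gdk_memory_texture_new_subtexture (memtex,
                                             tiles[i].x,
                                             tiles[i].y,
                                             tiles[i].width,
                                             tiles[i].height);
        upload_tile (tile);
        g_object_unref (tile);
      }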
It turns out glReadPixels() cannot convert pixels, and only specific
combinations of values are allowed for its format and type arguments.
You need to know which ones, or things will explode.
GL is great.
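For the record, the combination that is always allowed on GLES is
GL_RGBA + GL_UNSIGNED_BYTE; anything else has to match the
implementation-defined pair that can be queried:

    GLint format, type;

    /* Ask GL which extra format/type pair glReadPixels() accepts
     * for the current framebuffer. */
    glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_FORMAT, &format);
    glGetIntegerv (GL_IMPLEMENTATION_COLOR_READ_TYPE, &type);

    glReadPixels (0, 0, width, height,
                  (GLenum) format, (GLenum) type, pixels);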
Pass a format to GdkTextureClass::download(). That way we can download
data in any format.
Also replace gdk_texture_download_texture() with
gdk_memory_texture_from_texture(), which again takes a format.
The old functionality is still there for code that wants it: just pass
gdk_texture_get_format (texture) as the format argument.
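For example (a sketch; GDK_MEMORY_R8G8B8A8_PREMULTIPLIED stands in
for whatever GdkMemoryFormat the caller wants):

    /* Old behavior: download in the texture's own format. */
    old = gdk_memory_texture_from_texture (texture,
                                           gdk_texture_get_format (texture));

    /* New possibility: force the download into a specific format. */
    rgba = gdk_memory_texture_from_texture (texture,
                                            GDK_MEMORY_R8G8B8A8_PREMULTIPLIED);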
We're caching two things: either a node itself, as rendered, or a
parent storing a cached version of a child, as rendered to an offscreen
with the size and location of the parent.
If both the parent and the child use the cache, this causes a conflict,
because the cache currently keys on the node pointer alone, which has
the same value for the node-as-itself and the child-node-of-the-parent.
We fix this by adding another part to the key, "pointer_is_child",
which means we can have the same node pointer in the cache twice.
Additionally, in the child-is-rendered-offscreen case the offscreen
result actually depends on the position and size of the parent viewport,
so we need to store the parent bounds in that case.
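Sketched as a struct, the key then carries roughly this (illustrative;
only pointer_is_child is the actual new field name):

    typedef struct
    {
      GskRenderNode   *pointer;          /* the node itself */
      gboolean         pointer_is_child; /* disambiguates the two uses */
      graphene_rect_t  parent_rect;      /* only valid for child entries */
    } CacheKey;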
I found that the gears demo was spending 40% cpu
downloading a GL texture every frame, only to
upload it again to another context.
While the GSK rendering and the GtkGLArea use different
GL contexts, they are (usually) connected by sharing data
with the same global context, so we can just use the
texture without the download/upload dance. This brings
gears down to < 10% cpu.
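A minimal sketch of the fast path, assuming a shared-context check
along the lines of gdk_gl_context_is_shared() and a hypothetical
slow-path helper:

    if (gdk_gl_context_is_shared (area_context, renderer_context))
      {
        /* Same share group: the texture id is valid in both
         * contexts, so bind it directly. */
        glBindTexture (GL_TEXTURE_2D, texture_id);
      }
    else
      {
        /* Unshared contexts still need the download/upload dance. */
        texture_id = download_and_reupload (texture, renderer_context);
      }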
Do custom uploads rather than using gdk_cairo_surface_upload_to_gl(),
because this way we avoid a roundtrip (memcpy and possibly conversion)
to the cairo image surface format.
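Roughly, the custom upload looks like this (formats and variables are
illustrative; the point is uploading `data` in its native layout):

    glBindTexture (GL_TEXTURE_2D, texture_id);

    /* Honor the source stride instead of memcpy()ing into a
     * cairo image surface first. */
    glPixelStorei (GL_UNPACK_ROW_LENGTH, stride / bytes_per_pixel);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8,
                  width, height, 0,
                  GL_BGRA, GL_UNSIGNED_BYTE, data);
    glPixelStorei (GL_UNPACK_ROW_LENGTH, 0);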
We need to include both the scale and the filtering
in the key for the texture cache, since those affect
the texture.
This fixes misrendering in the recorder in the inspector
whenever transforms are involved. An example where this
was showing up is testrevealer's swing transition.
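Illustratively, the key now carries everything that influences the
cached texture, not just the node pointer (names made up):

    typedef struct
    {
      GskRenderNode *node;   /* what was rendered */
      float          scale;  /* affects the rasterized size */
      GLint          filter; /* GL_NEAREST vs. GL_LINEAR */
    } TextureCacheKey;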
When rendering to an offscreen because of transforms,
check if transforming the bounds of the node results
in a non-axis-aligned quad. If the quad stays axis-aligned, we want
GL_NEAREST interpolation to get sharp edges. Otherwise,
we use GL_LINEAR to get better results for things
that are actually transformed.
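One way to express that check is via GskTransform's category, where
everything from GSK_TRANSFORM_CATEGORY_2D_AFFINE up only translates
and scales (the renderer's actual test may differ):

    if (gsk_transform_get_category (transform) >= GSK_TRANSFORM_CATEGORY_2D_AFFINE)
      filter = GL_NEAREST;  /* still axis-aligned: keep edges sharp */
    else
      filter = GL_LINEAR;   /* rotated/sheared: sample smoothly */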
Sprinkle various g_assert() around the code where gcc cannot figure
out on its own that a variable is not NULL, and where too much
refactoring would be needed to make it do that.
Also fix usage of g_assert_nonnull(x) to use g_assert(x), because the
former is not marked as G_GNUC_NORETURN, since of course GTester
supports not aborting on aborts.
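For example (gsk_container_node_get_child() is just an arbitrary
accessor here):

    GskRenderNode *child = gsk_container_node_get_child (node, 0);

    /* The failure branch of g_assert() does not return, so gcc can
     * assume `child` is non-NULL below; g_assert_nonnull() does not
     * give it that guarantee. */
    g_assert (child != NULL);
    gsk_render_node_draw (child, cr);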
Apparently glGenTextures() and friends only "reserve names";
initializing them is what actually creates the objects. Using
glObjectLabel() on textures before initializing them will throw a
GL_INVALID_VALUE error.
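The safe ordering is therefore:

    GLuint id;

    glGenTextures (1, &id);              /* only reserves a name */
    glBindTexture (GL_TEXTURE_2D, id);   /* now the object exists */
    glObjectLabel (GL_TEXTURE, id, -1, "atlas texture");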
This fixes rendering to a texture on Intel hardware. The glClear calls
would throw a GL_FRAMEBUFFER_INCOMPLETE error here, because the
gsk_gl_driver_begin_frame() call in do_render() reset the framebuffer
object in use.
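Sketch of the fix (signatures simplified):

    gsk_gl_driver_begin_frame (driver);

    /* begin_frame reset the framebuffer binding, so re-attach the
     * render target before clearing, or glClear() operates on an
     * incomplete framebuffer. */
    glBindFramebuffer (GL_FRAMEBUFFER, render_target);
    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);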
Put GdkGLTexture into its own file and rename the API to
gdk_gl_texture_foo() instead of gdk_texture_foo_for_gl().
Apart from naming, no actual code changes.
Add a setter for per-renderer debug flags, and use
them where possible. Some places don't have easy access
to a renderer, so this is not complete.
Also, use g_message instead of g_print throughout.