This is the first example of indirect rendering involving
a box gadget. For now, we iterate the child gadgets manually,
and rely on gtk_container_propagate_render_node for the
child widgets. Eventually, we may want a better solution
here.
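In the meantime, the shape of it is roughly this; the signatures of
gtk_container_propagate_render_node and the gadget accessor are
assumptions here:

    static GskRenderNode *
    gtk_box_get_render_node (GtkWidget   *widget,
                             GskRenderer *renderer)
    {
      GtkBox *box = GTK_BOX (widget);
      GtkBoxPrivate *priv = gtk_box_get_instance_private (box);
      GskRenderNode *node;
      GList *children, *l;

      /* node for the box gadget itself (hypothetical accessor) */
      node = gtk_css_gadget_get_render_node (priv->gadget, renderer);

      /* each child widget contributes its own render node */
      children = gtk_container_get_children (GTK_CONTAINER (box));
      for (l = children; l != NULL; l = l->next)
        gtk_container_propagate_render_node (GTK_CONTAINER (box),
                                             GTK_WIDGET (l->data),
                                             renderer,
                                             node);
      g_list_free (children);

      return node;
    }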
...and implement it for GtkCssGadget and GtkCssCustomGadget.
This allows us to decide, on a per-object basis, whether a custom
gadget needs a render node for its content.
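The per-object decision might look like this in the custom gadget's
implementation of the vfunc; all names here are illustrative:

    static GskRenderNode *
    gtk_css_custom_gadget_get_render_node (GtkCssGadget *gadget,
                                           GskRenderer  *renderer)
    {
      GtkCssCustomGadget *custom = GTK_CSS_CUSTOM_GADGET (gadget);

      /* no draw function: this gadget needs no content node at all */
      if (custom->draw_func == NULL)
        return NULL;

      /* otherwise fall back to the base gadget implementation */
      return GTK_CSS_GADGET_CLASS (gtk_css_custom_gadget_parent_class)
                 ->get_render_node (gadget, renderer);
    }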
The custom gadget draw function has the side effect of informing
the gadget machinery whether to draw focus or not. Bring the
draw function back, just for its boolean return value. We may
want to find a better solution for this.
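Roughly, the return value ends up gating the focus rendering; the draw
function signature below follows the GtkCssGadget pattern but is an
assumption:

    gboolean draw_focus = FALSE;

    /* the draw function survives only for its return value */
    if (custom->draw_func)
      draw_focus = custom->draw_func (gadget, cr, x, y, width, height,
                                      custom->draw_data);

    if (draw_focus && gtk_widget_has_visible_focus (widget))
      gtk_render_focus (context, cr, x, y, width, height);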
I don't think this should stay in the code long-term, but it
is useful for debugging. It helped me track down some suspicious
placements of render nodes.
Give all nodes the same detail about the owner widget.
This reveals that every GtkCssCustomGadget gets a
DrawGadgetContents node, even if its draw_func is NULL.
We may want to come up with a better solution for that.
When creating the GskRenderNodes for the gadgets we should not translate
the coordinates inside the Cairo context, but we should tweak the
coordinates of the anchor point.
This is still not enough to get an appropriate rendering, as the result
is still slightly offset to the left.
Instead of passing the size of the buffer, we should pass the number of
quads; we know what the size of a single quad structure is, so we can do
the multiplication internally when creating the VAO.
This allows us to print the quads for debugging purposes.
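A sketch of the new shape, with GskQuadVertex standing in for the
actual quad structure:

    static guint
    create_buffer_for_quads (const GskQuadVertex *quads,
                             int                  n_quads)
    {
      guint vao, buffer;

      glGenVertexArrays (1, &vao);
      glBindVertexArray (vao);

      glGenBuffers (1, &buffer);
      glBindBuffer (GL_ARRAY_BUFFER, buffer);

      /* callers count quads; the size computation lives here now */
      glBufferData (GL_ARRAY_BUFFER,
                    n_quads * sizeof (GskQuadVertex),
                    quads,
                    GL_STATIC_DRAW);

      return vao;
    }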
GtkWidget.create_render_node() sets up a GskRenderNode appropriate for
rendering the contents of a widget, including its bounds,
transformation, and anchor point.
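Presumably used from a widget's render node path along these lines;
the draw-context accessor and the fallback drawing are assumptions:

    GskRenderNode *node = gtk_widget_create_render_node (widget, renderer);
    cairo_t *cr = gsk_render_node_get_draw_context (node);

    /* bounds, transformation, and anchor point are already set up;
     * all that is left is providing the contents */
    gtk_widget_draw (widget, cr);
    cairo_destroy (cr);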
The naming is consistent with other scene graph libraries, as it
represents an additional translation transformation applied on top of
the provided transformation matrices.
We can also simplify the implementation by applying the translation when
we compute the world matrix.
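In graphene terms, something like the following when computing a
node's world matrix; the multiplication order is an assumption tied to
graphene's row-vector convention:

    graphene_matrix_t anchor_m, tmp, world;
    graphene_point3d_t anchor = GRAPHENE_POINT3D_INIT (anchor_x, anchor_y, 0.f);

    /* the anchor point is one extra translation... */
    graphene_matrix_init_translate (&anchor_m, &anchor);

    /* ...folded in together with the node's transform and the
     * parent's world matrix */
    graphene_matrix_multiply (&node->transform, &anchor_m, &tmp);
    graphene_matrix_multiply (&tmp, &parent_world, &world);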
We keep the textures used inside a frame around until the end of the
following frame; whenever we need a texture of the same size and an
existing one is not marked as in use, we simply reuse it.
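A sketch of the lookup side, with made-up field and helper names:

    static Texture *
    get_texture_for_size (GskGLDriver *driver,
                          int          width,
                          int          height)
    {
      GList *l;

      /* reuse a texture of the same size that nobody marked in use */
      for (l = driver->textures; l != NULL; l = l->next)
        {
          Texture *t = l->data;

          if (t->width == width && t->height == height && !t->in_use)
            {
              t->in_use = TRUE;
              return t;
            }
        }

      return create_texture (driver, width, height);
    }

Whatever is still unused when the following frame ends gets freed.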
We were allocating a surface that's big enough for the clip, and
we were setting the transform for that, but then GtkContainer
was overriding the transform with the one for the allocation.
Also, we were drawing at the clip position, not the allocation
position.
This was overwhelming other useful debug output, so make it
opt-in. We print the render items for both OpenGL and transforms,
since otherwise the matrices bleed into each other.
Since we use an FBO to render the contents of the render node tree, the
coordinate space is going to be flipped in GL. We can undo the flip by
using an appropriate projection matrix, instead of changing the sampling
coordinates in the shaders and updating all our coordinates at render
time.
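One way to express the un-flip with graphene; whether top and bottom
need swapping depends on how the FBO is sampled, so treat this as a
sketch:

    graphene_matrix_t projection;

    /* orthographic projection arranged to compensate for the
     * FBO's flipped Y axis */
    graphene_matrix_init_ortho (&projection,
                                0.f, width,    /* left, right */
                                height, 0.f,   /* top, bottom */
                                -1.f, 1.f);    /* near, far */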
We need to apply a scaling factor whenever we deal with user-supplied
coordinates, as sketched after this list:
- when creating textures
- when setting up the viewport
- when submitting the scene
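For instance, with create_texture() as a stand-in, the scale factor
itself coming from the GDK window:

    int scale = gdk_window_get_scale_factor (window);

    /* textures are created at device resolution... */
    texture_id = create_texture (driver, width * scale, height * scale);

    /* ...and so is the viewport */
    glViewport (0, 0,
                viewport.size.width * scale,
                viewport.size.height * scale);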
Render nodes need access to rendering information like scaling factors.
If we keep render nodes separate from renderers until we submit a node
tree for rendering, we're going to have to duplicate all that
information in a way that makes the API more complicated and its
semantics fuzzier.
By having GskRenderer create GskRenderNode instances we can tie nodes
and renderers together; since higher layers will also have access to
the renderer instance, this does not add any burden to callers.
Additionally, if profiling indicates that we are spending too much
time allocating new render nodes, we can now easily implement a
free-list or a renderer-specific allocator without breaking the API.
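So the entry point becomes something along these lines; the exact
constructor name is an assumption:

    /* the renderer hands out node instances, and can later back
     * this with a free-list or a custom allocator */
    GskRenderNode *node = gsk_renderer_create_render_node (renderer);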
We cannot implement GtkWidgetClass.get_render_node() in GtkContainer
without breaking the fallback path that renders a widget to a single
render node rasterization. For GtkContainer subclasses we should provide
a simple API, similar to gtk_container_propagate_draw(), that gathers
all the render nodes for each child.
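Such a helper could mirror gtk_container_propagate_draw() closely; a
sketch, with names and signatures assumed:

    void
    gtk_container_propagate_render_node (GtkContainer  *container,
                                         GtkWidget     *child,
                                         GskRenderer   *renderer,
                                         GskRenderNode *parent_node)
    {
      GskRenderNode *node;

      /* gather the child's node and hang it off the container's node */
      node = gtk_widget_get_render_node (child, renderer);
      if (node != NULL)
        gsk_render_node_append_child (parent_node, node);
    }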
If a node is non-opaque and has a non-zero opacity, we need to paint
its contents and children to an offscreen buffer first, and then
render the resulting texture at the desired opacity; otherwise the
opacities will combine and produce the wrong rendering, with
overlapping children showing through each other instead of the node
fading as a whole.
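The Cairo expression of that offscreen pass, with draw_contents() and
draw_children() as stand-ins:

    /* render contents and children at full opacity into a group */
    cairo_push_group (cr);
    draw_contents (cr);
    draw_children (cr);

    /* then composite the flattened result once, at the node's opacity */
    cairo_pop_group_to_source (cr);
    cairo_paint_with_alpha (cr, opacity);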
We're not going to use separate rendering lists any time soon, and the
way we render items is less like a game engine and more like a simple
compositor. This means we don't need to do two-pass rendering, with
opaque items first and transparent items later.
Use appropriate names, and annotate them with their type: 'u' for
uniforms, 'a' for attributes. The common shader preambles are split
from the bodies, so we need a way to tell uniforms and attributes
apart from their names alone.
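The preamble then telegraphs each name's role by itself; the
particular names below are illustrative:

    static const char *vertex_preamble =
      "uniform mat4 uModelViewProjection;\n"  /* 'u': a uniform */
      "attribute vec2 aPosition;\n"           /* 'a': an attribute */
      "attribute vec2 aUv;\n";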
We want the GL driver to cache as many resources as possible, so we can
always ensure that we're in a consistent state, and we can handle state
transitions more appropriately.
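In practice this means the driver shadows GL state and filters out
redundant transitions; a sketch with assumed field names:

    static void
    gsk_gl_driver_bind_texture (GskGLDriver *driver,
                                guint        texture_id)
    {
      /* the driver knows what is bound, so no-op rebinds are skipped */
      if (driver->bound_texture == texture_id)
        return;

      glBindTexture (GL_TEXTURE_2D, texture_id);
      driver->bound_texture = texture_id;
    }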
Drop the texture parameters handling from the texture creation, and
associate them with the contents upload. Also, rename the function to
something more in line with what it does.
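After the change, creation and upload might look like this, with the
filter arguments moving to the upload call; the function names are
approximations:

    /* creation no longer takes sampling parameters... */
    texture_id = gsk_gl_driver_create_texture (driver, width, height);

    /* ...they travel with the contents instead */
    gsk_gl_driver_init_texture_with_surface (driver, texture_id, surface,
                                             GL_LINEAR, GL_LINEAR);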
We want to add the list of FBOs tied to a texture; this means we cannot
trivially copy the Texture structure when adding it to a GArray. We're
also going to have more textures than VAOs, so it makes more sense to
use an O(1) access data structure for them.
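A GHashTable keyed by texture id fits both needs: entries keep a
stable address for the FBO lists to point at, and lookup is O(1).
Assuming a texture_free() destructor:

    driver->textures = g_hash_table_new_full (g_direct_hash, g_direct_equal,
                                              NULL,
                                              (GDestroyNotify) texture_free);

    /* insertion does not copy the Texture, so attached FBO lists stay valid */
    g_hash_table_insert (driver->textures,
                         GINT_TO_POINTER (t->texture_id),
                         t);

    /* O(1) retrieval by id */
    t = g_hash_table_lookup (driver->textures, GINT_TO_POINTER (texture_id));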