If a node is not fully opaque, i.e. its opacity is greater than zero
but less than one, we need to paint its contents and children to an
off-screen buffer first, and then render the resulting texture at the
desired opacity; otherwise the opacities of the node and of its
children combine and produce the wrong rendering.
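As a rough illustration, the off-screen pass has the following shape;
render_node_contents(), draw_texture_quad() and the uOpacity uniform
are placeholders, not the actual GSK code:

```c
static void
render_with_opacity (GLuint program,
                     GLuint fbo,
                     GLuint texture,
                     float  opacity)
{
  /* 1. Redirect rendering into an off-screen framebuffer */
  glBindFramebuffer (GL_FRAMEBUFFER, fbo);
  glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_TEXTURE_2D, texture, 0);
  glClearColor (0, 0, 0, 0);
  glClear (GL_COLOR_BUFFER_BIT);

  /* 2. Paint the node and its children at full opacity */
  render_node_contents ();

  /* 3. Composite the off-screen texture onto the main target,
   *    applying the node's opacity exactly once
   */
  glBindFramebuffer (GL_FRAMEBUFFER, 0);
  glUseProgram (program);
  glUniform1f (glGetUniformLocation (program, "uOpacity"), opacity);
  glBindTexture (GL_TEXTURE_2D, texture);
  draw_texture_quad ();
}
```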
We're not going to use separate rendering lists any time soon, and the
way we render items is closer to that of a simple compositor than to
that of a game engine. This means we don't need to perform two-pass
rendering, with opaque items drawn first and transparent items later.
Use appropriate names, and annotate them with their type: 'u' for
uniforms, 'a' for attributes. The common shader preambles are split
from the bodies, so we need a way to tell uniforms and attributes
apart from their names alone.
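On the renderer side the convention makes location lookups
self-documenting; the specific names below are illustrative:

```c
/* 'u' prefix marks uniforms, 'a' prefix marks attributes */
GLint u_source   = glGetUniformLocation (program_id, "uSource");
GLint u_alpha    = glGetUniformLocation (program_id, "uAlpha");
GLint a_position = glGetAttribLocation (program_id, "aPosition");
GLint a_uv       = glGetAttribLocation (program_id, "aUv");
```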
We want the GL driver to cache as many resources as possible, so we can
always ensure that we're in a consistent state, and we can handle state
transitions more appropriately.
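A minimal sketch of the state-tracking idea, with hypothetical driver
fields: every binding goes through the driver, which remembers what is
currently bound and can skip redundant GL calls.

```c
typedef struct {
  GLuint bound_texture;
} Driver;

static void
driver_bind_texture (Driver *driver,
                     GLuint  texture_id)
{
  /* Skip the GL call if the texture is already bound */
  if (driver->bound_texture == texture_id)
    return;

  glBindTexture (GL_TEXTURE_2D, texture_id);
  driver->bound_texture = texture_id;
}
```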
Drop the texture parameters handling from the texture creation, and
associate them with the contents upload. Also, rename the function to
something more in line with what it does.
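Sketched with hypothetical function names, the split looks like this:
creating a texture no longer takes any filtering parameters, which are
passed when the contents are uploaded instead.

```c
/* Creation only allocates the texture id */
int texture_id = driver_create_texture (driver, width, height);

/* Filtering parameters travel with the contents upload */
driver_init_texture_with_surface (driver, texture_id, surface,
                                  GL_LINEAR, GL_LINEAR);
```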
We can use the GL_ARB_timer_query extension (available since OpenGL
3.2, and part of the OpenGL specification since version 3.3) to query
the time elapsed when drawing each frame. This allows us to gather
timing information on our use of the GPU.
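The basic usage of the extension looks like this; since the query runs
asynchronously on the GPU, the result is typically read back on a
later frame to avoid stalling the pipeline:

```c
GLuint query;
GLuint64 elapsed_ns;

glGenQueries (1, &query);

glBeginQuery (GL_TIME_ELAPSED, query);
/* ... submit the frame's draw calls ... */
glEndQuery (GL_TIME_ELAPSED);

/* Later, once the result is available */
glGetQueryObjectui64v (query, GL_QUERY_RESULT, &elapsed_ns);
```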
For the root node we do not need to use blending, as it does not have
any backdrop to blend into. We can use a simpler 'blit' program that
only takes the content of the source and fills the texture quad with
it.
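In terms of GL state the difference is roughly the following; the
program names and the premultiplied-alpha blend function are
assumptions made for illustration:

```c
if (is_root_node)
  {
    /* No backdrop: straight copy of the source, no blending needed */
    glDisable (GL_BLEND);
    glUseProgram (blit_program);
  }
else
  {
    /* Regular nodes blend over whatever was drawn below them */
    glEnable (GL_BLEND);
    glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glUseProgram (blend_program);
  }
```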
We should use ShaderBuilder to create and store programs for the GL
renderer. This simplifies the creation of programs (by moving the
compilation phase into the ShaderBuilder::create_program() method),
and moves us towards being able to create multiple programs while
only keeping a reference to each program id.
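A rough sketch of the intended usage; the exact GskShaderBuilder API
(preamble setters, file names, error handling) is assumed here for
illustration:

```c
GskShaderBuilder *builder = gsk_shader_builder_new ();
GError *error = NULL;
int program_id;

gsk_shader_builder_set_vertex_preamble (builder, "gl3_common.vs.glsl");
gsk_shader_builder_set_fragment_preamble (builder, "gl3_common.fs.glsl");

/* Compilation and linking happen inside create_program(); the
 * renderer only needs to hold on to the resulting program id
 */
program_id = gsk_shader_builder_create_program (builder,
                                                "blend.vs.glsl",
                                                "blend.fs.glsl",
                                                &error);
```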
We should keep the ShaderBuilder around and use it to query the
various uniform and attribute locations when needed, instead of
copying those locations into the Renderer instance. This gives us a
bit more flexibility once the renderer builds more than one program.
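In other words, instead of a block of cached location fields in the
Renderer, lookups go through the builder whenever a program is about
to be used; the helper name below is hypothetical:

```c
/* Hypothetical helper: the builder resolves the location for the
 * given program, so the Renderer does not need to store it
 */
GLint alpha_location =
  shader_builder_get_uniform_location (self->shader_builder,
                                       program_id, "uAlpha");

glUniform1f (alpha_location, opacity);
```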
The GL renderer should build the GLSL shaders using GskShaderBuilder.
This allows us to separate the common parts into separate files, and
assemble them as necessary, instead of shipping one big shader per type
of GL API (GL3, GL legacy, and GLES).
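Conceptually, the assembly step picks the right preamble for the
current GL API and prepends it to the shader body; the file names and
the load_source() helper are placeholders:

```c
const char *preamble_file;

if (gdk_gl_context_get_use_es (context))
  preamble_file = "gles_common.glsl";
else if (legacy_context)
  preamble_file = "gl_legacy_common.glsl";
else
  preamble_file = "gl3_common.glsl";

/* The source handed to glShaderSource() is preamble + body */
g_autofree char *preamble = load_source (preamble_file);
g_autofree char *body = load_source ("blend.fs.glsl");
g_autofree char *source = g_strconcat (preamble, body, NULL);

glShaderSource (shader_id, 1, (const char **) &source, NULL);
```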
This commit changes the way GskRenderer and GskRenderNode interact and
are meant to be used.
GskRenderNode should represent a transient tree of rendering nodes,
which is submitted to the GskRenderer at render time: once the toolkit
and application code have finished assembling the render tree, its
ownership is transferred to the renderer.
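A sketch of the intended flow; the node constructors and the exact
gsk_renderer_render() signature are assumptions made for illustration:

```c
/* The toolkit builds a transient tree of render nodes... */
GskRenderNode *root = gsk_render_node_new ();
GskRenderNode *child = gsk_render_node_new ();

gsk_render_node_append_child (root, child);

/* ...and hands it to the renderer, which takes ownership of the
 * tree; the caller must not use the nodes after this point
 */
gsk_renderer_render (renderer, root);
```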
Whenever the render tree changes we want to drop the RenderItem
arrays, as each item holds a pointer to its GskRenderNode, and those
pointers would become dangling once the root node changes.
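A minimal sketch of the invalidation, with assumed field names: the
cached items are cleared as soon as a different root node is set.

```c
if (self->root_node != root)
  {
    /* Every RenderItem points into the old node tree, so the whole
     * array must go before the old nodes are released
     */
    g_array_set_size (self->render_items, 0);
    self->root_node = root;
  }
```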
GSK is conceptually split into two scene graphs:
* a simple rendering tree of operations
* a complex set of logical layers
The latter is built on top of the former, and adds convenience and
high-level API for application developers.
The lower layer, though, is what gets fed into the rendering
pipeline: since it is simple, it can be translated into appropriate
rendering commands with minimal state changes.
The lower layer is also suitable for reuse by more complex higher
layers, like the CSS machinery in GTK, without necessarily porting
those layers to the GSK high-level API.
This lower layer is based on GskRenderNode instances, which represent
the tree of rendering operations, and on a GskRenderer instance, which
takes the render nodes and submits them (after potentially reordering
and transforming them into a more appropriate representation) to the
underlying graphics system.