We align the data to a multiple of the vertex stride. That way we use
more memory, but we can compute an offset into the vertex buffer
without having to rebind it.
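A minimal sketch of the rounding this implies (the helper name is made
up), assuming the alignment is meant to keep offsets at whole vertex
indices:

  /* Round a byte size up to the next multiple of the vertex stride,
   * so that data appended afterwards starts at a whole vertex index. */
  static gsize
  round_up_to_stride (gsize size,
                      gsize stride)
  {
    return (size + stride - 1) / stride * stride;
  }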
We can set the vertex offset while counting the data; this gets rid of
the need to pass all the counting machinery into the actual data
collection code.
When attempting a complex transform, check whether the clip can be
ignored, and if so, ignore it.
That way we don't cause fallbacks when transforming the clip is too
complex.
The idea is that for a rectangle intersection, each corner of the
result is either entirely part of one original rectangle or it is
an intersection point.
By detecting these two cases and treating them differently, we can
simplify the code that compares rounded rectangles.
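A sketch of that corner rule, using a hypothetical corner_matches()
helper (GskRoundedRect keeps its radii in a corner[4] array):

  /* For each corner of the intersection: if it coincides with a corner
   * of one input, inherit that input's rounding; otherwise it is an
   * intersection point and therefore square. */
  for (guint i = 0; i < 4; i++)
    {
      if (corner_matches (result, i, a))
        result->corner[i] = a->corner[i];
      else if (corner_matches (result, i, b))
        result->corner[i] = b->corner[i];
      else
        result->corner[i] = GRAPHENE_SIZE_INIT (0, 0);
    }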
Instead of emitting the render commands once per rectangle of the clip
region, just emit them once with the region's extents.
This is generally faster because it emits fewer commands to the GPU,
even though it may touch significantly more pixels.
For a proper method, we'd need to record the commands per clip rectangle
instead of emitting all of them all the time.
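The extents-based approach boils down to this sketch (the emit function
is hypothetical):

  cairo_rectangle_int_t extents;

  /* One set of commands covering the whole region instead of one per
   * rectangle. */
  cairo_region_get_extents (clip_region, &extents);
  emit_render_commands (self, &extents);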
The border and color shaders - the ones that do AA - now multiply their
coordinates by the scale factor, which gives them better rounding
capabilities.
In particular, this improves the case where they are used with
fractional scaling, where the scale is defined at the root element.
Previously, we just used the default scale factor, but now that it is
available in the push constants, we can read it back when creating
offscreens and rendering fallbacks.
So do that.
It's a 1:1 replacement for GskVulkanPushConstants, just without the
indirection through a different file.
GskVulkanPushConstants as a struct is gone now.
The file still exists to handle the push_constants operation.
1. Use a graphene_vec2_t
2. Ensure it's always positive
3. Don't break with fallback
The scale value is nothing more than an indication of how many pixels to
assume per unit of a node.
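A sketch for points 1 and 2 (variable names are illustrative):

  graphene_vec2_t scale;

  /* The sign belongs to the transform, not to the pixel density, so
   * force both components positive. */
  graphene_vec2_init (&scale, fabsf (scale_x), fabsf (scale_y));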
We don't want to render the offscreen transformed, we want to render it
as-is.
We lose the correct scale factor, but that requires some separate work,
so for now it gets a bit blurry on hidpi.
This introduces the rect object and adds rect_distance() and
rect_coverage() functions.
_distance() returns the signed distance to the rectangle.
_coverage() returns the coverage of a pixel centered at that position.
Note that the pixel size is computed using dFdx/dFdy.
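For illustration, here is the signed distance rendered in C; this is a
sketch, not the shader code itself, since the real functions live in
GLSL, where rect_coverage() derives the pixel size from dFdx()/dFdy():

  /* Signed distance from (x, y) to a rect: negative inside, zero on
   * the edge, positive outside. */
  static float
  rect_distance (const graphene_rect_t *r,
                 float                  x,
                 float                  y)
  {
    float dx = MAX (r->origin.x - x, x - (r->origin.x + r->size.width));
    float dy = MAX (r->origin.y - y, y - (r->origin.y + r->size.height));

    if (dx <= 0.f && dy <= 0.f)
      return MAX (dx, dy);

    dx = MAX (dx, 0.f);
    dy = MAX (dy, 0.f);
    return sqrtf (dx * dx + dy * dy);
  }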
When the node bounds were a non-integer size, the texture would get
ceil()ed pixels, but various viewport or scissor computations might
floor() instead, leaving the right/bottom row of pixels untouched.
Make sure those functions ceil(), too.
This is not the optimal way of doing it: we're
reuploading the texture with client-side conversion.
But it fits nicely into our current handling of mipmaps.
We can do better once we use shaders for colorspace
conversions.
Make the callers of this function check for
straight alpha themselves, and only do the
version compatibility check here. This makes
the function usable in contexts where straight
alpha is acceptable.
When the command queue is out of batches, there is
no point in doing further work like allocating uniforms.
This helps us avoid assertions in the uniform code
that we would hit when we run out of uniform space
too.
When we start ignoring batches, we must do it everywhere,
or we may run into assertions. This was triggered by an
enormous text node tree produced by tests/rendernode-create.
Vulkan has a different initial coordinate system to GL.
GL:

  (-1, 1, -1) +------+.
              |`.    | `.
              |  `·--|---·
              |   :  |   :
              +------+.  :
               `.  :   `.:
                 `·------· (1, -1, 1)

Vulkan:

  (-1, -1, 0) +------+.
              |`.    | `.
              |  `·--|---·
              |   :  |   :
              +------+.  :
               `.  :   `.:
                 `·------· (1, 1, 1)
so adjust the near and far plane we pass to
graphene_matrix_init_ortho() to make it end up with the same
projection as the GL renderer.
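As a sketch (the plane values here are illustrative, not the exact ones
used): passing the planes swapped flips the Z axis of the resulting
matrix:

  graphene_matrix_t projection;

  /* Swapping near and far relative to the GL path flips Z, so the
   * Vulkan output matches the GL renderer's projection. */
  graphene_matrix_init_ortho (&projection,
                              0.f, width,   /* left, right */
                              0.f, height,  /* top, bottom */
                              far_plane, near_plane);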
This was the intention, but the object data by itself
does not achieve that: We do run dispose on the display
when it is closed, but object data is only cleared in
finalize. So listen to the ::closed signal and remove
the driver ourselves.
Fix up the driver's dispose implementation enough for
that to actually work.
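A sketch of the signal hookup; the remove_driver() helper stands in for
whatever actually drops the driver:

  static void
  display_closed_cb (GdkDisplay *display,
                     gboolean    is_error,
                     gpointer    user_data)
  {
    /* Waiting for finalize is too late; drop the driver as soon as
     * the display is closed. */
    remove_driver (display);
  }

  g_signal_connect (display, "closed",
                    G_CALLBACK (display_closed_cb), NULL);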
Most of the time we want to compute them based on the child node we
render to the offscreen, but not always.
For blend and cross-fade nodes, they need to be computed based on the
node's own bounds.
Fixes widget-factory page fade animation weirdly resizing the fading
pages.
We weren't looking in the build dir for generated files.
Actually make sure that we look in the build dir *first*, otherwise
glib-compile-resources will still use the wrong files.
... and use it in rendernodes.
Setting up textures for diffing is done via gdk_texture_set_diff(),
which should only be used during texture construction.
Note that the pointers to next/previous are allowed to dangle if one of
the textures is finalized, but that's fine because we always check both
textures' links to each other before we consider the pointer valid.
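In a sketch (the field names are hypothetical), the check never
dereferences a possibly-dangling pointer, it only compares it against a
texture we already hold:

  /* The link only counts if both sides still agree; a dangling
   * pointer cannot satisfy both comparisons. */
  if (t1->diff_next == t2 && t2->diff_previous == t1)
    {
      /* the link between t1 and t2 is valid */
    }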
When slicing the texture, the GL renderer was
forgetting to apply the viewport origin. This
shows up when rendering things with negative
scales, leading to negative origins.
Our coverage computation only works for well-behaved
rects and rounded rects. But our modelview transform
might flip x or y around, causing things to fail.
Add functions to normalize rects and rounded rects,
and use them whenever we transform a rounded rect in GLSL.
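In C terms, normalizing folds a negative size back into the origin; a
sketch (the shader additionally has to swap the corner radii of rounded
rects):

  static void
  rect_normalize (graphene_rect_t *r)
  {
    if (r->size.width < 0.f)
      {
        r->origin.x += r->size.width;
        r->size.width = -r->size.width;
      }
    if (r->size.height < 0.f)
      {
        r->origin.y += r->size.height;
        r->size.height = -r->size.height;
      }
  }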
Pass the GLsync object from the texture into our
command queue, and when executing the queue,
wait on the sync object the first time we
use its associated texture.
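A sketch of the wait, with hypothetical bookkeeping fields on the
texture:

  /* Make the GPU wait on the sync object before the first draw that
   * samples the texture; later draws can skip it. */
  if (texture->sync != NULL && !texture->sync_waited)
    {
      glWaitSync (texture->sync, 0, GL_TIMEOUT_IGNORED);
      texture->sync_waited = TRUE;
    }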
When adding mask nodes, I overlooked that
we have two separate functions for determining
what transforms a node supports without offscreens.
Since we claim that mask nodes support general
transform, they must certainly support 2d transforms
as well.
They're not needed, and GLES doesn't technically support them, even
though GTK had been using them via epoxy, which sneakily used the
GL_OES_vertex_array_object extension behind our back.
gsk_vulkan_render_download_target() currently resets the uploader
objects before downloading the image that it produces. This is
problematic because there might be unreleased buffers and images
in the command queue.
In particular, this can make validation layers complain about the
glyph atlas - of all things! - upload buffer being released while
still being used by the command queue.
Fix that by resetting the uploader after downloading the image.
The current implementation of the glyph cache deals with atlases by
padding them with 1 pixel at the beginning, at the end, and between
each glyph.
That's cool and all, however, there's a very subtle problem with
this approach: the contents of the atlas are garbage, so this padding
is filled with garbage memory!
Rework the Vulkan glyph cache to draw each and every glyph onto a
surface that has a 1 pixel border of padding around it. Ensure the
surface is completely black by drawing a rectangle before handing
it to Pango to draw the glyph. Update tx and ty to pick the texture
position adjusted to the 1 pixel padding. The atlas now starts at
position (0, 0), since each glyph individually contains its own padding.
To improve legibility, add a PADDING define and use it everywhere.
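A sketch of the per-glyph drawing (variable names are illustrative):

  #define PADDING 1

  /* The surface includes the padding; filling it first guarantees the
   * padding never contains garbage. */
  surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32,
                                        width + 2 * PADDING,
                                        height + 2 * PADDING);
  cr = cairo_create (surface);
  cairo_set_operator (cr, CAIRO_OPERATOR_SOURCE);
  cairo_set_source_rgba (cr, 0, 0, 0, 0);
  cairo_rectangle (cr, 0, 0, width + 2 * PADDING, height + 2 * PADDING);
  cairo_fill (cr);
  cairo_set_operator (cr, CAIRO_OPERATOR_OVER);

  /* Draw the glyph offset by the padding. */
  cairo_translate (cr, PADDING, PADDING);
  pango_cairo_show_glyph_string (cr, font, glyphs);
  cairo_destroy (cr);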
Vulkan renders text using VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA and
VK_BLEND_FACTOR_SRC_ALPHA, but that implies per-channel alpha
blending, which currently produces the wrong results when blending
glyphs with the images beneath them.
Use the default pipeline constructors, which imply using
ONE and ONE_MINUS_SRC_ALPHA.
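That corresponds to the standard premultiplied-alpha blend state; as a
sketch:

  VkPipelineColorBlendAttachmentState blend_state = {
    .blendEnable = VK_TRUE,
    .srcColorBlendFactor = VK_BLEND_FACTOR_ONE,
    .dstColorBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,
    .colorBlendOp = VK_BLEND_OP_ADD,
    .srcAlphaBlendFactor = VK_BLEND_FACTOR_ONE,
    .dstAlphaBlendFactor = VK_BLEND_FACTOR_ONE_MINUS_SRC_ALPHA,
    .alphaBlendOp = VK_BLEND_OP_ADD,
    .colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT
                    | VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT,
  };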
Basically what GL does, but without any debug or feature flag
to gatekeep it, since the Vulkan backend itself is experimental
already.
Ceil surface sizes, and floor coordinates, to the fractional scale
value.
The rects passed to the clip region are in buffer coordinates, and
must not be scaled. Consider the following scenario: Wayland, with
a 1024x768@2 window. That gives us a 2048x1536 raw image. To set up
the Vulkan render pass code, we'd scale 2048x1536 *again*, to an
unreasonable 4096x3072, which is (1) incorrect and (2) really
incorrect and (3) can lead to crashes at best, full GPU resets
at worst - and a GPU reset is incredibly not fun!
Now that we pass the right clip regions at the right coordinates
at all times, remove the extra scaling from the render pass.
This part of the Vulkan renderer is almost exactly equal to the GL
renderer, and the GL renderer already does that since at least
2a38cecd33. Copy that into the Vulkan renderer.
A nice side effect from this commit is that resizing a window now
actually works again.
Sneak in a trivial cleanup by using a variable to hold the draw
index.
This was a tricky one to figure out, but it's pretty simple to
understand (I hope!).
So, this AMD card I'm using requires buffer memory sizes to be
aligned to 16 bytes. Intel is aligned to 4 bytes I think, but
AMD - or at least this AMD model in particular - uses 16 bytes
for alignment.
When creating a particular texture (I did not determine which one
specifically!), a buffer of size 1276 bytes is requested.
1276 / 16 = 79.75, which is clearly not aligned to the required
16 bytes.
We request Vulkan to create a buffer of 1276 bytes for us, it
figures out that it's not aligned, and creates a buffer of 1280
bytes, which is aligned. The extra 4 bytes are wasted, but that's
okay. We immediately query this buffer for this exact information,
using vkGetBufferMemoryRequirements(), and proceed to create actual
memory to back this buffer up.
The buffer tells us we must use 1280 bytes, so we pass 1280 bytes
and everyone is happy, right? Of course not. We pass 1276 bytes,
and Vulkan is subtly unhappy at us.
Fix that by passing the value that Vulkan asks us to use, i.e.,
the size returned by vkGetBufferMemoryRequirements().
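In code, the fix amounts to this pattern:

  VkMemoryRequirements requirements;

  vkGetBufferMemoryRequirements (device, buffer, &requirements);

  VkMemoryAllocateInfo allocate_info = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
    /* use the size Vulkan reported (1280), not the size the buffer
     * was created with (1276) */
    .allocationSize = requirements.size,
    .memoryTypeIndex = memory_type_index,
  };

  vkAllocateMemory (device, &allocate_info, NULL, &memory);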
This is what GL does, and for a reason: it avoids ending up with a
zero width or height for very small glyphs. Also, switch to dividing
by a float (1024.0) instead of an integer (1024).
This doesn't make any difference now, but will allow us to copy
subregions more easily. This is not obvious, but here's a quick
explanation:
Leaving 'bufferRowLength' and 'bufferImageHeight' at zero implies that
Vulkan will assume a tightly packed layout matching the size passed in
the 'imageExtent' field.
Right now, this assumption is correct - the only user of this
function is the glyph cache, and it only copies and uploads
exact rects. Next commits will change that assumption, so we
must pass 'buffer*' fields, and tell Vulkan, "this part of the
buffer represents an image of width x height, and I want the
subregion (x, y, smallerWidth, smallerHeight) of this image".
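A sketch of such a subregion copy (names illustrative, assuming 4 bytes
per pixel):

  /* The buffer holds a full atlas_width x atlas_height image; the
   * 'buffer*' fields describe that layout, while bufferOffset points
   * at the first pixel of the subregion being copied. */
  VkBufferImageCopy region = {
    .bufferOffset = (y * atlas_width + x) * 4,
    .bufferRowLength = atlas_width,
    .bufferImageHeight = atlas_height,
    .imageSubresource = {
      .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
      .mipLevel = 0,
      .baseArrayLayer = 0,
      .layerCount = 1,
    },
    .imageOffset = { x, y, 0 },
    .imageExtent = { smaller_width, smaller_height, 1 },
  };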
When creating an image using gsk_vulkan_image_new_for_framebuffer(),
it passes VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL.
However, this is a mistake. The spec demands that the initial
layout must be either VK_IMAGE_LAYOUT_UNDEFINED or
VK_IMAGE_LAYOUT_PREINITIALIZED.
Apparently this was an oversight from commit b97fb75146, since the
commit message even documents that, and all other calls pass either
VK_IMAGE_LAYOUT_UNDEFINED or VK_IMAGE_LAYOUT_PREINITIALIZED.
Create framebuffer images using VK_IMAGE_LAYOUT_UNDEFINED, which is
what was originally expected.
Fractional scaling with the GL renderer is
experimental for now, so we disable it unless
GDK_DEBUG=gl-fractional is set.
This will give us time to work out the kinks.
This commit combines changes in the Wayland backend,
the GL context frontend, and the GL renderer to switch
them all to use the fractional scale.
In the Wayland backend, we now use the fractional scale
to size the EGL window.
In the GL frontend code, we use the fractional scale to
scale the damage region and surface in begin/end_frame.
And in the GL renderer, we replace gdk_surface_get_scale_factor()
with gdk_surface_get_scale().
Instead of tracking a single scale, track x and y scales separately.
Factor out gsk_vulkan_render_pass_new() into a private function that
receives both scales, and pass 'scale_factor' for both.
This is mostly a cosmetic change, and the goal is twofold:
1. Make it easier to spot unimplemented render node types; and
2. Prepare for a small rework
The implementation for each node now lives in specific functions,
like the GL renderer; unlike the GL renderer, however, we use a
node type vtable to map GskRenderNodeType → implementation. Render
nodes without an implementation map to NULL and use the fallback
implementation. Render nodes that fail any check and return FALSE
also use the fallback implementation.
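A sketch of the vtable shape, with hypothetical function names:

  typedef gboolean (* RenderNodeFunc) (GskVulkanRenderPass *self,
                                       GskRenderNode       *node);

  /* Indexed by GskRenderNodeType; unimplemented types stay NULL. */
  static const RenderNodeFunc node_vtable[] = {
    [GSK_COLOR_NODE] = render_color_node,
    [GSK_TEXTURE_NODE] = render_texture_node,
    /* ... */
  };

  RenderNodeFunc func = node_vtable[gsk_render_node_get_node_type (node)];

  if (func == NULL || !func (self, node))
    render_fallback_node (self, node);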
If we encounter a node or texture for the first time and it is going
to be used again, give it a name.
Then, when encountering it again, print it by name instead of
duplicating it.
We extend the syntax for nodes from:

  <node-type> { ... }

to

  <node-type> { ... }
  <node-type> <string> { ... }
  <string>;

where the first is the same as before, the second defines a named node,
and the last references a previously defined node.
Or to give an example:
  color "node" {
    bounds: 0 0 10 10;
    color: red;
  }
  transform {
    transform: translate(20, 0);
    child: "node";
  }
This will draw the red box twice, once at (0,0) and once at
(20,0).
The intended use for this is both shortening generated node files as
well as allowing to write tests that reuse nodes, in particular when
dealing with caches.
We extend the syntax for textures from just:

  <url>

to

  [<string>] <url>
  <string>

where the first defines a named texture while the second references a
texture.
Or to give an example:
  texture {
    bounds: 0 0 10 10;
    texture: "foo" url("foo.png");
  }
  texture {
    bounds: 20 0 10 10;
    texture: "foo";
  }
This will draw the texture "foo.png" twice, once at (0,0) and once at
(20,0).
The intended use for this is both shortening generated node files as
well as allowing to write tests that reuse textures, in particular when
mixing them in texture and texture-scale nodes.