This doesn't make any difference now, but will allow us to copy
subregions more easily. This is not obvious, but here's a quick
explanation:
Leaving 'bufferRowLength' and 'bufferImageHeight' as zero implies that
Vulkan will assume the buffer is tightly packed, matching the size
passed in the 'imageExtent' field.
Right now, this assumption is correct - the only user of this
function is the glyph cache, and it only copies and uploads
exact rects. The next commits will change that assumption, so we
must pass the 'buffer*' fields and tell Vulkan: "this part of the
buffer represents an image of width x height, and I want the
subregion (x, y, smallerWidth, smallerHeight) of this image".
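A minimal sketch of what that looks like in VkBufferImageCopy terms,
with illustrative variable names (buf_width, buf_height, x, y, width,
height and bytes_per_pixel are made up for the example):

  /* The buffer holds a buf_width x buf_height image; copy only the
   * (x, y) + width x height subregion of it. bufferOffset is in bytes,
   * bufferRowLength and bufferImageHeight are in texels. */
  VkBufferImageCopy region = {
    .bufferOffset = (y * buf_width + x) * bytes_per_pixel,
    .bufferRowLength = buf_width,
    .bufferImageHeight = buf_height,
    .imageSubresource = {
      .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
      .mipLevel = 0,
      .baseArrayLayer = 0,
      .layerCount = 1,
    },
    .imageOffset = { 0, 0, 0 },
    .imageExtent = { width, height, 1 },
  };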
gsk_vulkan_image_new_for_framebuffer() passes
VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL as the initial layout when
creating the image.
However, this is a mistake. The spec demands that the initial
layout must be either VK_IMAGE_LAYOUT_UNDEFINED or
VK_IMAGE_LAYOUT_PREINITIALIZED.
Apparently this was an oversight from commit b97fb75146, since the
commit message even documents that, and all other calls pass either
VK_IMAGE_LAYOUT_UNDEFINED or VK_IMAGE_LAYOUT_PREINITIALIZED.
Create framebuffer images using VK_IMAGE_LAYOUT_UNDEFINED, which is
what was originally expected.
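The fix amounts to the initial layout field of the image creation
info, roughly like this (only the relevant field shown; the rest of
the struct is elided):

  VkImageCreateInfo info = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
    /* ... */
    .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED, /* was COLOR_ATTACHMENT_OPTIMAL */
  };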
Fractional scaling with the GL renderer is
experimental for now, so we disable it unless
GDK_DEBUG=gl-fractional is set.
This will give us time to work out the kinks.
This commit combines changes in the Wayland backend,
the GL context frontend, and the GL renderer to switch
them all to use the fractional scale.
In the Wayland backend, we now use the fractional scale
to size the EGL window.
In the GL frontend code, we use the fractional scale to
scale the damage region and surface in begin/end_frame.
And in the GL renderer, we replace gdk_surface_get_scale_factor()
with gdk_surface_get_scale().
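The renderer-side change boils down to something like the following
snippet (illustrative, not the literal diff):

  /* before: int scale = gdk_surface_get_scale_factor (surface);  e.g. 2   */
  double scale = gdk_surface_get_scale (surface);             /* e.g. 1.5 */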
Instead of tracking a single scale, track x and y scales separately.
Factor out gsk_vulkan_render_pass_new() into a private function that
receives both scales, and pass 'scale_factor' for both.
This is mostly a cosmetic change, and the goal is twofold:
1. Make it easier to spot unimplemented render node types; and
2. Prepare for a small rework
The implementation for each node now lives in a dedicated function,
as in the GL renderer; unlike the GL renderer, however, we use a
node type vtable to map GskRenderNodeType → implementation. Render
node types without an implementation map to NULL and use the
fallback implementation. Render nodes that fail any check and
return FALSE also use the fallback implementation.
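A rough sketch of that dispatch, with illustrative names that do not
match the actual GSK symbols (RenderNodeFunc, N_RENDER_NODE_TYPES and
the per-node functions are placeholders):

  typedef gboolean (* RenderNodeFunc) (GskVulkanRenderPass *pass,
                                       GskRenderNode       *node);

  /* Node types without an entry stay NULL. */
  static const RenderNodeFunc node_vtable[N_RENDER_NODE_TYPES] = {
    [GSK_COLOR_NODE]   = render_color_node,
    [GSK_TEXTURE_NODE] = render_texture_node,
    /* ... */
  };

  static void
  add_node (GskVulkanRenderPass *pass,
            GskRenderNode       *node)
  {
    RenderNodeFunc func = node_vtable[gsk_render_node_get_node_type (node)];

    /* A NULL entry, or an implementation that fails a check and
     * returns FALSE, means: use the fallback implementation. */
    if (func == NULL || !func (pass, node))
      render_fallback_node (pass, node);
  }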
If we encounter a node or texture for the first time and it is going
to be used again, give it a name.
Then, when encountering it again, print it by name instead
of duplicating its definition.
We extend the syntax for nodes from:

  <node-type> { ... }

to

  <node-type> { ... }
  <node-type> <string> { ... }
  <string>;

where the first is the same as before, the second defines a named node
and the last references a previously defined node.
Or to give an example:
color "node" {
bounds: 0 0 10 10;
color: red;
}
transform {
bounds: 20 0 10 10;
child: "node";
}
This will draw the red box twice, once at (0,0) and once at
(20,0).
The intended use for this is both to shorten generated node files and
to allow writing tests that reuse nodes, in particular when dealing
with caches.
We extend the syntax for textures from just:

  <url>

to

  [<string>] <url>
  <string>

where the first defines a named texture while the second references a
previously defined texture.
Or to give an example:
  texture {
    bounds: 0 0 10 10;
    texture: "foo" url("foo.png");
  }

  texture {
    bounds: 20 0 10 10;
    texture: "foo";
  }
This will draw the texture "foo.png" twice, once at (0,0) and once at
(20,0).
The intended use for this is both to shorten generated node files and
to allow writing tests that reuse textures, in particular when mixing
them in texture and texture-scale nodes.
When the GL texture already has a mipmap, we don't
have to download and reupload it to generate one.
We differentiate the handling: for texture-scale nodes
we do want to force mipmap creation even if
that requires reuploading the GL texture, while for plain
texture nodes we just take advantage of a
preexisting mipmap to allow trilinear filtering when
downscaling, or create one if we have to upload the
texture anyway.
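In pseudocode, the policy is roughly the following (all names here
are illustrative, not the actual GSK helpers):

  if (is_texture_scale_node && wants_mipmap_filter)
    {
      /* force a mipmap, reuploading the GL texture if needed */
      if (!has_mipmap (texture))
        reupload_with_mipmap (texture);
    }
  else /* plain texture node */
    {
      /* reuse a preexisting mipmap for trilinear downscaling, or
       * build one opportunistically if we have to upload anyway */
      if (!is_uploaded (texture))
        upload (texture, /* with_mipmap = */ needs_downscaling);
    }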
Store texture coordinates for each slice
instead of assuming 0,0,1,1, and generate
overlapping slices to allow for proper mipmaps.
This almost fixes trilinear filtering with
sliced textures.
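A hypothetical shape for a slice entry after this change (not the
actual struct, just the information each slice now has to carry):

  typedef struct {
    guint texture_id;            /* GL texture backing this slice */
    cairo_rectangle_int_t rect;  /* pixels of the source this slice covers,
                                  * including the overlap with its neighbours */
    graphene_rect_t area;        /* texture coordinates of the part to draw,
                                  * instead of the previously assumed 0,0,1,1 */
  } TextureSlice;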
We cheat and just set the texture parameters instead and hope nothing
explodes.
So far, it hasn't.
This is only needed to support GLES 2.0 so it's quite a limited set of
hardware these days.
Instead of uploading a texture once per filter, ensure textures are
uploaded as rarely as possible and use samplers to switch between
filters.
Unfortunately, we sometimes still have to reupload a texture, when it
is an external one and we want to create mipmaps.
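The idea, in plain GL terms (GL 3.3 / GLES 3.0 sampler objects; the
snippet is illustrative, not the GSK code):

  GLuint samplers[2];
  glGenSamplers (2, samplers);

  /* linear filtering, no mipmaps */
  glSamplerParameteri (samplers[0], GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glSamplerParameteri (samplers[0], GL_TEXTURE_MAG_FILTER, GL_LINEAR);

  /* trilinear filtering */
  glSamplerParameteri (samplers[1], GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
  glSamplerParameteri (samplers[1], GL_TEXTURE_MAG_FILTER, GL_LINEAR);

  /* same texture, different filtering, no re-upload needed */
  glActiveTexture (GL_TEXTURE0 + unit);
  glBindTexture (GL_TEXTURE_2D, texture_id);
  glBindSampler (unit, use_mipmaps ? samplers[1] : samplers[0]);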
When filtering changes for an already-cached
texture, we need to clear the render data
before setting the new one, otherwise the
new data does not take effect and we end up
reuploading the texture every frame.
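Schematically, the fix looks like this (the helper names are
placeholders for the texture render-data accessors, not real API):

  /* Filtering changed for a cached texture: drop the old render data
   * first, otherwise the subsequent set is ignored and the texture
   * gets re-uploaded every frame. */
  texture_clear_render_data (texture);          /* placeholder name */
  texture_set_render_data (texture, new_entry); /* placeholder name */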
Allow setting the max texture size using the
GSK_MAX_TEXTURE_SIZE environment variable.
We only allow lowering the max (for obvious
reasons), and we don't allow values smaller
than 512 (since our atlases use that size).
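A sketch of the clamping this describes (illustrative, not the
actual code):

  static int
  get_max_texture_size (int hw_max)
  {
    const char *env = g_getenv ("GSK_MAX_TEXTURE_SIZE");
    int max;

    if (env == NULL)
      return hw_max;

    max = atoi (env);
    if (max < 512)            /* atlases are 512 pixels, don't go below */
      max = 512;

    return MIN (max, hw_max); /* only allow lowering the limit */
  }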