When we truncated the command queue because it was too big,
we were messing up our state accounting
and running into criticals as a consequence.
This can be reproduced by opening a well-populated
fishbowl demo in the inspector's recorder.
Fixes: #5188
Add GskMaskNode, and support it in the render node
parser, in the inspector and in GtkSnapshot.
The rendering is just fallback for now.
Based on old work by Timm Bäder.
By dividing the blur radius to obtain the clip radius, we may end up
with halved values that result in an overshrunk clip mask. Extend the
clip so that we are sure to cover the last pixel.
Fixes artifacts seen with the cairo renderer on X11 when resizing
windows horizontally: a black, 1px high line would appear at the
top of the window due to these outset bounds being used in clipping.
More mysteriously, this also seems to fix resize lag in the GL renderer
(also on X11) when e.g. the bottom-right corner of a window is resized
diagonally in the bottom-left -> top-right direction, or
bottom-right -> top-left.
Related: https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/2175#note_1599335
Instead of asserting only in debug builds (which are generally not
shipped in distributions), we should deliver a critical log-level message
so that these problems can be found sooner when not developing with
jhbuild, Flatpak, etc.
Also assert that we've set up the state correctly when realizing the
GskGLRenderer object.
Fixes #4625
Those property features don't seem to be in use anywhere.
They are redundant since the docs cover the same information
and more. They also created unnecessary translation work.
Closes #4904
Having the initial layout set to VK_IMAGE_LAYOUT_GENERAL causes issues
when going from the final layout back to the initial layout, since the
image is then expected to already be in the general layout. Setting the
initial layout to VK_IMAGE_LAYOUT_UNDEFINED doesn't have this restriction.
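A minimal sketch of the change in terms of a Vulkan attachment description; all values other than the layouts are placeholders, not GTK's actual attachment setup:

```c
#include <vulkan/vulkan.h>

/* Illustrative attachment description: VK_IMAGE_LAYOUT_UNDEFINED as the
 * initial layout places no constraint on what layout the image is in when
 * the render pass begins, unlike VK_IMAGE_LAYOUT_GENERAL. */
static const VkAttachmentDescription attachment = {
  .format         = VK_FORMAT_B8G8R8A8_UNORM,        /* placeholder */
  .samples        = VK_SAMPLE_COUNT_1_BIT,
  .loadOp         = VK_ATTACHMENT_LOAD_OP_CLEAR,
  .storeOp        = VK_ATTACHMENT_STORE_OP_STORE,
  .stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
  .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
  .initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED,        /* was ..._GENERAL */
  .finalLayout    = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,  /* placeholder */
};
```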
We now collect this information during node
construction, so use it here.
The concrete change here is that we now avoid
offscreens for container nodes with multiple children,
as long as they don't overlap. In particular, this
avoids offscreens for ellipsized dim labels.
This fixes two issues with the offscreen rendering code for nodes with
bounds not aligned with the pixel grid:
1.) When drawing to an offscreen buffer the size of the offscreen buffer
was rounded up, but then later when used as texture the vertices
correspond to the original bounds with the unrounded size. This could
then result in the offscreen texture being drawn onscreen at a slightly
smaller size, which then led to it being visually shifted and blurry.
This is fixed by adjusting the u/v coordinates to ignore the padding
region in the offscreen texture that got added by the size increase from
rounding.
2.) The viewport used when rendering to the offscreen buffer was not
aligned with the pixel grid for nodes at coordinates not aligned with
the pixel grid. Because the content of the offscreen buffer is then not
aligned with the pixel grid, sampling from it later when it is used as a
texture results in interpolated values for an onscreen pixel. This
could also result in shifting and blurriness, especially for nested
offscreen rendering at different offsets.
This is fixed by adding similar padding at the beginning of the
texture and also adjusting the u/v coordinates to ignore this region.
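A minimal sketch of the u/v adjustment with hypothetical names; the unrounded bounds size, the leading padding, and the rounded-up texture size are the quantities described above:

```c
/* Map the node's unrounded bounds into texture coordinates so that the
 * padding added by rounding the offscreen up to whole pixels (and the
 * leading padding from item 2) is never sampled. */
static void
compute_offscreen_uv (float bounds_width,  float bounds_height,
                      float padding_left,  float padding_top,
                      float texture_width, float texture_height,
                      float *u0, float *v0, float *u1, float *v1)
{
  *u0 = padding_left / texture_width;
  *v0 = padding_top / texture_height;
  *u1 = (padding_left + bounds_width) / texture_width;
  *v1 = (padding_top + bounds_height) / texture_height;
}
```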
Fixes: https://gitlab.gnome.org/GNOME/gtk/-/issues/3833
This moves a lot of the texture atlas control out of the driver and into
the various texture libraries through their base GskGLTextureLibrary class.
Additionally, this gives libraries more control over allocation, which can
be necessary for some tooling such as future Glyphy integration.
As part of this, the 1x1 pixel initialization is moved to the Glyph library,
which is the only place where it is actually needed.
The compact vfunc is now responsible for compaction, and it allows us
to iterate the atlas hashtable a single time instead of twice as we were
doing previously.
The init_atlas vfunc is used to do per-library initialization such as
adding a 1x1 pixel in the Glyph cache used for coloring lines.
The allocate vfunc purely allocates but does no upload. This can be useful
for situations where a library wants to reuse the allocator from the
base class but does not want to actually insert a key/value entry. The
glyph library uses this for its 1x1 pixel.
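A hypothetical sketch of a vtable shaped like the one described above; the names and signatures are illustrative only and do not reflect GTK's private GskGLTextureLibraryClass:

```c
#include <glib-object.h>

typedef struct _ExampleTextureAtlas   ExampleTextureAtlas;
typedef struct _ExampleTextureLibrary ExampleTextureLibrary;

typedef struct _ExampleTextureLibraryClass
{
  GObjectClass parent_class;

  /* Compact all atlases in a single pass over the atlas hashtable. */
  void     (* compact)    (ExampleTextureLibrary *self);

  /* Per-library setup, e.g. the glyph cache's 1x1 pixel for coloring lines. */
  void     (* init_atlas) (ExampleTextureLibrary *self,
                           ExampleTextureAtlas   *atlas);

  /* Reserve space only; no upload and no key/value entry is inserted. */
  gboolean (* allocate)   (ExampleTextureLibrary *self,
                           ExampleTextureAtlas   *atlas,
                           int                    width,
                           int                    height,
                           int                   *out_x,
                           int                   *out_y);
} ExampleTextureLibraryClass;
```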
In the future, we will also likely want to decouple the rectangle packing
implementation from the atlas structure, or at least move it into a union
so that we do not allocate unused memory for alternate allocators.
This removes the sharing of atlases across various texture libraries. Doing
so is necessary so that atlases can have different semantics for how they
allocate within the texture as well as potentially allowing for different
formats of texture data.
For example, in the future we might store non-pixel data in the textures,
such as for Glyphy, or even keep glyphs with color content separate from
glyphs that have none and can use the alpha channel only.
This gives gskglprograms.defs a bit more control over how a shader
gets generated and whether it needs to combine sources. Currently, none of
the built-in shaders do that, but upcoming shaders which come from external
libraries will need the ability to inject additional sources in-between
layers.
If the max_entry_size is zero, then assume we can add anything to the
atlas. This allows for situations where we might be uploading an arc list
to the atlas instead of pixel data for GPU font rendering.
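A minimal sketch of the check, with illustrative names:

```c
#include <glib.h>

/* A max_entry_size of 0 means "no limit": anything fits in the atlas. */
static gboolean
entry_fits_in_atlas (guint max_entry_size,
                     int   width,
                     int   height)
{
  return max_entry_size == 0 ||
         (width <= (int) max_entry_size && height <= (int) max_entry_size);
}
```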
When large viewports are passed to gsk_renderer_render_texture(), don't
fail (or even return NULL).
Instead, draw multiple tiles and assemble them into a memory texture.
Tests added to the testsuite for this.
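A rough sketch of the tiling idea, assuming a fixed tile size, the default download format, and no error handling; this is not the actual implementation:

```c
#include <gdk/gdk.h>
#include <gsk/gsk.h>

static GdkTexture *
render_tiled (GskRenderer           *renderer,
              GskRenderNode         *node,
              const graphene_rect_t *viewport,
              int                    tile_size)
{
  int width = (int) viewport->size.width;
  int height = (int) viewport->size.height;
  gsize stride = (gsize) width * 4;
  guchar *data = g_malloc0 (stride * height);
  GBytes *bytes;
  GdkTexture *result;

  for (int y = 0; y < height; y += tile_size)
    for (int x = 0; x < width; x += tile_size)
      {
        graphene_rect_t tile = GRAPHENE_RECT_INIT (viewport->origin.x + x,
                                                   viewport->origin.y + y,
                                                   MIN (tile_size, width - x),
                                                   MIN (tile_size, height - y));
        GdkTexture *part = gsk_renderer_render_texture (renderer, node, &tile);

        /* Download the tile straight into its place in the big buffer. */
        gdk_texture_download (part, data + y * stride + x * 4, stride);
        g_object_unref (part);
      }

  bytes = g_bytes_new_take (data, stride * height);
  result = gdk_memory_texture_new (width, height, GDK_MEMORY_DEFAULT,
                                   bytes, stride);
  g_bytes_unref (bytes);

  return result;
}
```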
There are situations where our "default framebuffer" is not actually
zero, yet we still want to apply a scissor rect.
Generally, 0 is the default framebuffer. But on platforms where we need
to bind a platform-specific feature to a GL_FRAMEBUFFER, we might have a
default that is not 0. For example, on macOS we bind an IOSurfaceRef to
a GL_TEXTURE_RECTANGLE which then is assigned as the backing store for a
framebuffer. This is different than using gsk_gl_renderer_render_texture()
in that we don't want to incur an extra copy to the destination surface
nor do we even have a way to pass a texture_id into render_texture().
If the rendering operation is over an opaque region, we can potentially
avoid clearing a large section of the framebuffer destination. In some
cases you do want to clear, such as when clearing the whole contents,
as some drivers have fast paths for that which avoid bringing data back
into the framebuffer.
Now that we have the shaders working on Windows under GLES with libANGLE using
a 3.0+ context, drop the check that falls back to the Cairo renderer when GLES is
being used.
Various transforms are normalized with their next transform, and if they
end up being the identity transform, NULL is returned.
For example, a translation by (x,y,z) followed by one by (-x,-y,-z) will
result in NULL.
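A small example of that behavior with the public transform API (assuming <gsk/gsk.h> is included):

```c
GskTransform *t = NULL;

t = gsk_transform_translate_3d (t, &GRAPHENE_POINT3D_INIT (10, 20, 30));
t = gsk_transform_translate_3d (t, &GRAPHENE_POINT3D_INIT (-10, -20, -30));

/* The two translations normalize away; the identity transform is
 * represented as NULL. */
g_assert (t == NULL);
```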
Document the return value and more importantly, specify that a call to
`gsk_renderer_realize()` needs to be matched with a call to
`gsk_renderer_unrealize()`.
Prevents issues like https://gitlab.gnome.org/GNOME/gtk/-/issues/4625
Instead of just passing major/minor, pass them twice, once for GL and
once for GLES. This way, we don't need to check for GL and GLES
separately.
If something is supported unconditionally, passing 0/0 works fine.
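A hedged sketch of what such a call might look like, with `context` being a GdkGLContext; the helper is private and its exact signature may differ:

```c
/* Require GL 4.2, or GLES 3.0, in a single check. */
if (!gdk_gl_context_check_version (context, 4, 2, 3, 0))
  return FALSE;

/* Something supported unconditionally: 0/0 for both GL and GLES. */
if (!gdk_gl_context_check_version (context, 0, 0, 0, 0))
  g_assert_not_reached ();
```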
That said, I'd like to group the arguments somehow, because otherwise
it's just a confusing list of numbers - but I have no idea how to do
that.
Limit the diff region to 30 rectangles (randomly chosen because it
looked big enough to not trigger by accident and small enough to not
cause performance issues).
If the diff region gets more complicated, we abort to the parent node
and use its bounds as the diff region instead and then continue diffing
the rest of the node tree.
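A minimal sketch of the bail-out condition; the constant and function names are illustrative:

```c
#include <cairo.h>
#include <glib.h>

#define MAX_DIFF_RECTANGLES 30

/* When the accumulated diff region gets this complex, give up on this
 * subtree and let the caller add the parent node's bounds instead. */
static gboolean
diff_region_too_complex (const cairo_region_t *region)
{
  return cairo_region_num_rectangles (region) > MAX_DIFF_RECTANGLES;
}
```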
Fixes: #4560
Fixes: #2396
For libANGLE to work with our shaders, we must use "300 es" for
the #version directive in our shaders, as well as using the non-legacy/
non-GLES codepath in the shaders. In order to check whether we are
using the GLSL 300 es shaders, we check whether we are using a GLES 3.0+
context. As a result, make ->glsl_version a const char* and make sure
the existing shader version macros are defined appropriately, and add a
new macro for the "300 es" shader version string.
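A sketch of the version selection; the non-ES version strings shown are illustrative, not the exact compiler code:

```c
static const char *
pick_glsl_version (GdkGLContext *context,
                   gboolean      legacy)
{
  if (gdk_gl_context_get_use_es (context))
    return "300 es";   /* requires a GLES 3.0+ context, as libANGLE provides */

  return legacy ? "110" : "150";
}
```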
This will allow the gtk4 programs to run under Windows using EGL via
libANGLE. Some of the GL demos won't work for now, but at least this
makes things a lot better for using GL-accelerated graphics under Windows
for those that want to or need to use libANGLE (such as those with
graphics drivers that aren't capable of our Desktop (W)GL requirements in
GTK).
On Windows with nVidia drivers at least, when we create a legacy context
via wglCreateContext(), we may still get a (W)GL 4.x context. Allow
such contexts to also use GLSL version 130 instead of 110, so that
things do continue to work.
But don't call it too early: we only want to call it once we have
prepared the target.
This way, we guarantee that a GL context is always available and that it
is bound to the correct target.
Don't pass texture + rect, but instead have
gdk_memory_texture_new_subtexture()
and use it to generate subtextures and pass them.
This has the advantage of downloading a too-large texture only once
instead of N times.
It does not belong in GdkGLContext; it's a renderer thing.
It's also the only user of that API.
Introduce gdk_gl_context_check_version() private API to make version
checks simpler.
It turns out glReadPixels() cannot convert pixels, and you are only
allowed to pass a single value into the function arguments. You need to
know which ones, or things will explode.
GL is great.
Pass a format to GdkTextureClass::download(). That way we can download
data in any format.
Also replace gdk_texture_download_texture() with
gdk_memory_texture_from_texture() which again takes a format.
The old functionality is still there for code that wants it: Just pass
gdk_texture_get_format (texture) as the format argument.
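For example (these are the private GDK helpers named in this commit; the return type shown is an assumption):

```c
/* Old behavior: download in the texture's own format. */
GdkTexture *mem = gdk_memory_texture_from_texture (texture,
                                                   gdk_texture_get_format (texture));

/* Or force a specific format and let the download convert. */
GdkTexture *rgba = gdk_memory_texture_from_texture (texture,
                                                    GDK_MEMORY_R8G8B8A8_PREMULTIPLIED);
```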
Make a deep texture if the render nodes have
high-depth content.
For now, we use 32F here for the deep format,
since using 16F causes small rounding errors
that break the memorytexture roundtrip tests.
Look at the framebuffer and the rendernode to
determine what format to use for intermediate
textures.
Our preference here is to use fp16, if we have it
and it makes sense for the framebuffer we're given.
Add private api to find out if the content
of a render node should be considered 'deep'.
The information is collected at creation time,
so there is no tree-walking involved when we
are using this information in the renderer.
Currently, this comes down to whether there are
any texture nodes with high depth textures in the subtree.
In the future, we may want to allow marking gradient
nodes in this way as well.
For MemoryTexture, this is a simple change.
For GLTexture, we need to query the format at texture creation. This
sounds like a bad idea and extra work until one realizes that we'd
need to do that anyway when using the texture the first time - either
when downloading, or when trying to use it in a rendernode, where we
will soon need that information to determine if the texture prefers high
depth.
Also, now make gdk_memory_convert() the only conversion function
and allow conversions between any 2 formats by going via a float[4].
This could be optimized via fast-paths, but so far it isn't.
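A tiny self-contained illustration of the "go via a float[4]" idea, not GDK's actual code: unpack one format to floats, then pack to another.

```c
#include <stdint.h>

static void
convert_rgba8_to_rgba16 (const uint8_t src[4],
                         uint16_t      dest[4])
{
  float rgba[4];

  for (int i = 0; i < 4; i++)
    rgba[i] = src[i] / 255.0f;                        /* source format -> float[4] */

  for (int i = 0; i < 4; i++)
    dest[i] = (uint16_t) (rgba[i] * 65535.0f + 0.5f); /* float[4] -> target format */
}
```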
We never put large icons into the icon cache,
so all its items are always atlased, but we do
put large glyphs into the glyph cache, and we
were never freeing those items, even when they
go unused. Fix that.
This reverts commit 87af45403a.
I've found that this change is needed to ensure that the
bounding boxes of text nodes encompass all the glyph drawing.
Without it, we overdraw the widget boundaries and cut off
glyphs.
We are rendering the glyphs on a larger surface,
and we should avoid introducing unnecessary
rounding errors here. Also, I've found that
we always need to enlarge the surface by one
pixel in each direction to avoid cutting off
the tops of large glyphs.
For 2D transforms, we can read the scale
factors more directly off the matrix.
This should eventually be moved out into a
function to decompose a 2D transform into
scale + rotation + skew + translation.
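A sketch of reading the scales off the matrix of a transform known to be 2D (ignoring skew), using the public gsk_transform_to_2d():

```c
#include <math.h>
#include <gsk/gsk.h>

static void
get_2d_scale (GskTransform *transform,
              float        *scale_x,
              float        *scale_y)
{
  float xx, yx, xy, yy, dx, dy;

  gsk_transform_to_2d (transform, &xx, &yx, &xy, &yy, &dx, &dy);

  /* The lengths of the matrix columns are the scale factors,
   * independent of any rotation. */
  *scale_x = sqrtf (xx * xx + yx * yx);
  *scale_y = sqrtf (xy * xy + yy * yy);
}
```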
We avoid an offscreen if we know the child node
can 'handle' the transform. Shadow nodes can if their
child node does - either the child node is a text node
in which case the shortcuts we take for shadow nodes
will work fine with the transform (we just render the
text node offset), or the child is not a text node,
in which case we render the shadow to an offscreen
anyway.
This change makes the label-shadows reftest pass with
the GL renderer, not by fixing the issue but by avoiding
it.
For shadow nodes, we try pretty hard to avoid
rendering shadows, and we have a shortcut
that just renders text offset, but we can try
harder to do nothing - if the text is offset
by zero, we don't need to draw it at all.
We need to use an offscreen whenever there are overlapping
children somewhere in the tree below; just checking the
direct child of the opacity node is not enough.
Fixes: #4261
The cairo_t that we create to render glyphs for
the glyph cache needs to match the font options
that are supposedly governing how glyphs are
drawn.
Since we allow font options to be different per
widget in gtk, we need to have them at least at
the level of individual render nodes. Adding them
to the lookup key for the glyph cache has the
side effect of solving another problem: We are
not flushing the cache when font options change.
Since font options affect how the glyphs get rendered,
we need to pass the font options down from the gtk level
to where the glyph cache is populated.
Add a new gsk_text_node_new_full api that takes a
cairo_font_options_t in addition to the other parameters.
This is needed as GskRenderNode is its own fundamental type and has its
own GValue infrastructure. And I want to put render nodes into the
clipboard, which uses GValues.
A long time ago, Cairo shadows in both GTK3 and 4 were drawn at a size about
twice their radius. Eventually this was fixed but the shadow extents are
still calculated for the previous size and appear unreasonably large: for
example, 141px for a 50px radius shadow. This can get very noticeable in
places such as the invisible window frame, which gets included in screenshots.
https://gitlab.gnome.org/GNOME/gtk/-/merge_requests/3419 just divides the
radius by 2 when drawing a shadow with Cairo; do the same when calculating
extents.
See https://gitlab.gnome.org/GNOME/gtk/-/issues/3841
harfbuzz has all the information we need, so we
can avoid poking directly at freetype apis. Also
drop the caching of color glyph information until
it turns out to be a problem.
The vfunc is called to initialize GL and it returns a "base" context
that GDK then uses as the context all others are shared with. So the GL
context share tree now looks like:
+ context from init_gl
- context1
- context2
...
So this is a flat tree now; the complexity is gone.
The only caveat is that backends now need to create a GL context when
initializing GL so some refactoring was needed.
Two new functions have been added:
* gdk_display_prepare_gl()
This is public API and can be used to ensure that GL has been
initialized or if not, retrieve an error to display (or debug-print).
* gdk_display_get_gl_context()
This is a private function to retrieve the base context from
init_gl(). It replaces gdk_surface_get_shared_data_context().
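Example use of the new public API:

```c
static gboolean
ensure_gl (GdkDisplay *display)
{
  GError *error = NULL;

  if (!gdk_display_prepare_gl (display, &error))
    {
      g_warning ("GL is not available: %s", error->message);
      g_clear_error (&error);
      return FALSE;
    }

  return TRUE;
}
```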
Scale factors can be negative, but we were not
looking out for that, triggering an assertion when
trying to create a render target with negative
width or height. Avoid that.
Fixes: #4096
We are pretty good at batching commands now, and we can easily
produce batches that exceed the maximum number of elements per
draw call that the hw can handle. Query that number, and respect
it when merging batches.
This fixes the rendering of the overview map in GtkSourceView.
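A sketch of the idea under assumed names; the exact query and batch bookkeeping in the renderer may differ:

```c
#include <epoxy/gl.h>
#include <glib.h>

/* Query the driver's recommended per-draw vertex limit once, and refuse
 * to merge two batches when their combined size would exceed it. */
static gboolean
can_merge_batches (guint batch_vertices,
                   guint next_vertices)
{
  static GLint max_vertices;

  if (max_vertices == 0)
    glGetIntegerv (GL_MAX_ELEMENTS_VERTICES, &max_vertices);

  return batch_vertices + next_vertices <= (guint) max_vertices;
}
```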