Our coverage computation only works for well-behaved
rects and rounded rects. But our modelview transform
might flip x or y around, causing things to fail.
Add functions to normalize rects and rounded rects,
and use them whenever we transform a rounded rect in GLSL.
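A normalization helper essentially just folds negative extents back
into the origin. A minimal sketch, assuming graphene rects (the helper
name here is made up for illustration, it is not the actual GSK code):

    #include <graphene.h>

    /* Fold a flipped rect back into one with non-negative width/height
     * that covers the same area. */
    static void
    rect_normalize (graphene_rect_t *r)
    {
      if (r->size.width < 0)
        {
          r->origin.x += r->size.width;
          r->size.width = -r->size.width;
        }
      if (r->size.height < 0)
        {
          r->origin.y += r->size.height;
          r->size.height = -r->size.height;
        }
    }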
Pass the GLsync object from texture into our
command queue, and when executing the queue,
wait on the sync object the first time we
use its associated texture.
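Conceptually, the first command that samples from such a texture then
does something like this (a sketch, not the actual command-queue code;
that the queue also owns and deletes the sync object is an assumption
here):

    if (sync != NULL)
      {
        /* Make the GPU wait until the producing context has finished
         * writing to the texture before we sample from it. */
        glWaitSync (sync, 0, GL_TIMEOUT_IGNORED);
        glDeleteSync (sync);
        sync = NULL;
      }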
When adding mask nodes, I overlooked that
we have two separate functions for determining
what transforms a node supports without offscreens.
Since we claim that mask nodes support general
transform, they must certainly support 2d transforms
as well.
They're not needed and GLES doesn't technically support them, even
though GTK had been using them via epoxy sneakily using the
GL_OES_vertex_array_object extension behind our back.
Fractional scaling with the GL renderer is
experimental for now, so we disable it unless
GDK_DEBUG=gl-fractional is set.
This will give us time to work out the kinks.
This commit combines changes in the Wayland backend,
the GL context frontend, and the GL renderer to switch
them all to use the fractional scale.
In the Wayland backend, we now use the fractional scale
to size the EGL window.
In the GL frontend code, we use the fractional scale to
scale the damage region and surface in begin/end_frame.
And in the GL renderer, we replace gdk_surface_get_scale_factor()
with gdk_surface_get_scale().
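For the renderer, the change boils down to this (simplified):

    /* before: integer device scale */
    int scale_factor = gdk_surface_get_scale_factor (surface);

    /* after: fractional scale, e.g. 1.5 on a 150% scaled output */
    double scale = gdk_surface_get_scale (surface);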
When the GL texture already has a mipmap, we don't
have to download and reupload it to generate one.
We differentiate between texture scale nodes,
where we do want to force mipmap creation even if
it requires us to reupload the GL texture, and plain
texture nodes, where we just take advantage of a
preexisting mipmap to allow trilinear filtering for
downscaling, or create one if we have to upload the
texture anyway.
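The resulting policy is roughly the following (a sketch with
illustrative variable names, not the actual code):

    gboolean ensure_mipmap;

    if (is_texture_scale_node)
      /* force a mipmap, even if that means reuploading the texture */
      ensure_mipmap = TRUE;
    else
      /* plain texture node: use a preexisting mipmap, or create one
       * only if we have to upload the texture anyway */
      ensure_mipmap = texture_has_mipmap || needs_upload;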
Store texture coordinates for each slice
instead of assuming 0,0,1,1, and generate
overlapping slices to allow for proper mipmaps.
This almost fixes trilinear filtering with
sliced textures.
We cheat and just set the texture parameters instead and hope nothing
explodes.
So far it hasn't.
This is only needed to support GLES 2.0, so it affects quite a limited set of
hardware these days.
Instead of uploading a texture once per filter, ensure textures are
uploaded as little as possible and use samplers instead to switch
between different filters.
Sometimes we have to reupload a texture unfortunately, when it is an
external one and we want to create mipmaps.
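With sampler objects, switching filters no longer touches the texture
itself. Roughly (plain GL calls, not the actual renderer code):

    GLuint sampler;

    glGenSamplers (1, &sampler);
    glSamplerParameteri (sampler, GL_TEXTURE_MIN_FILTER,
                         GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri (sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* The sampler overrides the texture's own parameters on this unit,
     * so the same uploaded texture can be drawn with different filters. */
    glBindSampler (texture_unit, sampler);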
When filtering changes for an already-cached
texture, we need to clear the render data
before setting the new one, otherwise it
does not take effect and we end up reuploading
the texture every frame.
Allow setting the max texture size using the
GSK_MAX_TEXTURE_SIZE environment variable.
We only allow lowering the max (for obvious
reasons), and we don't allow values smaller
than 512 (since our atlases use that size).
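A sketch of the intended clamping (variable names are illustrative):

    const char *str = g_getenv ("GSK_MAX_TEXTURE_SIZE");

    if (str != NULL)
      {
        int requested = atoi (str);

        /* only ever lower the limit, and never go below the atlas size */
        if (requested >= 512 && requested < max_texture_size)
          max_texture_size = requested;
      }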
Use the same approach and only create an offscreen
that is big enough for the clipped part of the scaled
texture.
If the clipped part is still too large for a single
texture, we give up and just render the texture without
filters (using the regular texture rendering code path
which supports slicing).
The following commit will add the texture-scale-magnify-10000x
test which fails without this fix.
The API docs outline why quite well.
This should make it possible to save textures to image files
without any private API, with the same feature set that GTK uses.
Also remove the gsktextureprivate.h include where
gdk_texture_get_format() was the only reason for it.
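For example, an application that wants to save a texture to an image
file can now query the texture's format without private headers:

    /* gdk_texture_get_format() is public API now */
    GdkMemoryFormat format = gdk_texture_get_format (texture);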
When we truncate the command queue because it
is too big, we were messing up our state accounting
and running into criticals as a consequence.
This can be reproduced by opening a well-populated
fishbowl demo in the inspector's recorder.
Fixes: #5188
Add GskMaskNode, and support it in the render node
parser, in the inspector and in GtkSnapshot.
The rendering is just fallback for now.
Based on old work by Timm Bäder.
Instead of asserting only in debug builds (which are generally not
shipped in distributions) we should deliver a critical log-level message
so that these can be found sooner when not developing with jhbuild,
Flatpak, etc.
Also assert that we've setup the state correctly when realizing the
GskGLRenderer object.
Fixes #4625
Those property features don't seem to be in use anywhere.
They are redundant since the docs cover the same information
and more. They also created unnecessary translation work.
Closes #4904
We now collect this information during node
construction, so use it here.
The concrete change here is that we now avoid
offscreens for container nodes with multiple children,
as long as they don't overlap. In particular, this
avoids offscreens for ellipsized dim labels.
This fixes two issues with the offscreen rendering code for nodes with
bounds not aligned with the pixel grid:
1.) When drawing to an offscreen buffer, the size of the offscreen buffer
was rounded up, but later, when it was used as a texture, the vertices
corresponded to the original bounds with the unrounded size. This could
then result in the offscreen texture being drawn onscreen at a slightly
smaller size, which then led to it being visually shifted and blurry.
This is fixed by adjusting the u/v coordinates to ignore the padding
region in the offscreen texture that got added by the size increase from
rounding.
2.) The viewport used when rendering to the offscreen buffer was not
aligned with the pixel grid for nodes at coordinates not aligned with
the pixel grid. Because the content of the offscreen buffer is then
itself not aligned with the pixel grid, sampling from it when it is
later used as a texture will produce interpolated values for an
onscreen pixel. This could also result in shifting and blurriness,
especially for nested offscreen rendering at different offsets.
This is fixed by adding similar padding at the beginning of the
texture and also adjusting the u/v coordinates to ignore this region.
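In other words, with content of size w × h rendered at some scale into
an offscreen texture of rounded-up size tw × th, and pad_x/pad_y pixels
of padding before the content, the texture coordinates end up roughly
as (names are illustrative, all values treated as floats):

    float u0 = pad_x / tw;
    float v0 = pad_y / th;
    float u1 = (pad_x + w * scale) / tw;
    float v1 = (pad_y + h * scale) / th;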
Fixes: https://gitlab.gnome.org/GNOME/gtk/-/issues/3833
This moves a lot of the texture atlas control out of the driver and into
the various texture libraries through their base GskGLTextureLibrary class.
Additionally, this gives libraries more control over allocation, which can
be necessary for some tooling such as future Glyphy integration.
As part of this, the 1x1 pixel initialization is moved to the Glyph library
which is the only place where it is actually needed.
The compact vfunc is now responsible for compaction, and it allows us
to iterate the atlas hashtable a single time instead of twice as we were
doing previously.
The init_atlas vfunc is used to do per-library initialization such as
adding a 1x1 pixel in the Glyph cache used for coloring lines.
The allocate vfunc purely allocates but does no upload. This can be useful
for situations where a library wants to reuse the allocator from the
base class but does not want to actually insert a key/value entry. The
glyph library uses this for its 1x1 pixel.
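For orientation, the class then has roughly this shape (only the vfunc
names come from this change; the signatures below are invented for
illustration):

    struct _GskGLTextureLibraryClass
    {
      GObjectClass parent_class;

      /* hypothetical signatures */
      void     (*compact)    (GskGLTextureLibrary *self);
      void     (*init_atlas) (GskGLTextureLibrary *self,
                              GskGLTextureAtlas   *atlas);
      gboolean (*allocate)   (GskGLTextureLibrary *self,
                              GskGLTextureAtlas   *atlas,
                              int                  width,
                              int                  height,
                              int                 *out_x,
                              int                 *out_y);
    };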
In the future, we will also likely want to decouple the rectangle packing
implementation from the atlas structure, or at least move it into a union
so that we do not allocate unused memory for alternate allocators.
This removes the sharing of atlases across various texture libraries. Doing
so is necessary so that atlases can have different semantics for how they
allocate within the texture as well as potentially allowing for different
formats of texture data.
For example, in the future we might store non-pixel data in the textures
such as Glyphy or even keep glyphs with color content separate from glyphs
which do not and can use alpha channel only.
This gives gskglprograms.defs a bit more control over how a shader
will get generated and if it needs to combine sources. Currently, none of
the built-in shaders do that, but upcoming shaders which come from external
libraries will need the ability to inject additional sources in-between
layers.
If the max_entry_size is zero, then assume we can add anything to the
atlas. This allows for situations where we might be uploading an arc list
to the atlas instead of pixel data for GPU font rendering.
When large viewports are passed to gsk_renderer_render_texture(), don't
fail (or even return NULL).
Instead, draw multiple tiles and assemble them into a memory texture.
Tests added to the testsuite for this.
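Conceptually (a sketch; the helper shown here does not exist under
this name):

    /* Split an oversized viewport into tiles that fit into a single GL
     * texture, render and download each tile, and stitch the downloads
     * together into one big memory texture. */
    for (int y = 0; y < height; y += tile_size)
      for (int x = 0; x < width; x += tile_size)
        render_and_download_tile (renderer, root, x, y,
                                  MIN (tile_size, width - x),
                                  MIN (tile_size, height - y));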
There are situations where our "default framebuffer" is not actually
zero, yet we still want to apply a scissor rect.
Generally, 0 is the default framebuffer. But on platforms where we need
to bind a platform-specific feature to a GL_FRAMEBUFFER, we might have a
default that is not 0. For example, on macOS we bind an IOSurfaceRef to
a GL_TEXTURE_RECTANGLE which then is assigned as the backing store for a
framebuffer. This is different than using gsk_gl_renderer_render_texture()
in that we don't want to incur an extra copy to the destination surface
nor do we even have a way to pass a texture_id into render_texture().
If the rendering operation is over an opaque region, we can potentially
avoid clearing a large section of the framebuffer destination. In some
cases you do want to clear, such as when clearing the whole contents,
since some drivers have fast paths for that which avoid bringing data
back into the framebuffer.
Instead of just passing major/minor, pass them twice, once for GL and
once for GLES. This way, we don't need to check for GL and GLES
separately.
If something is supported unconditionally, passing 0/0 works fine.
That said, I'd like to group the arguments somehow, because otherwise
it's just a confusing list of numbers - but I have no idea how to do
that.
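For example, assuming these arguments end up in the
gdk_gl_context_check_version() helper mentioned further down, a feature
that needs GL 4.2 on desktop GL or GLES 3.0 is checked as:

    have_feature = gdk_gl_context_check_version (context, 4, 2, 3, 0);

    /* supported unconditionally */
    always_there = gdk_gl_context_check_version (context, 0, 0, 0, 0);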
For libANGLE to work with our shaders, we must use "300 es" for
the #version directive in our shaders, as well as using the non-legacy/
non-GLES codepath in the shaders. In order to check whether we are
using the GLSL 300 es shaders, we check whether we are using a GLES 3.0+
context. As a result, make ->glsl_version a const char* and make sure
the existing shader version macros are defined appropriately, and add a
new macro for the "300 es" shader version string.
This will allow the gtk4 programs to run under Windows using EGL via
libANGLE. Some of the GL demos won't work for now, but at least this
makes things a lot better for using GL-accelerated graphics under Windows
for those that want to or need to use libANGLE (such as those with
graphics drivers that aren't capable of our Desktop (W)GL requirements in
GTK).
On Windows with nVidia drivers at least, when we create a legacy context
via wglCreateContext(), we may still get a (W)GL 4.x context. Allow
such contexts to also use GLSL version 130 instead of 110, so that
things do continue to work.
But don't call it too early, we only want to call it once we have
prepared the target.
This way, we guarantee that a GL context is always available and that it
is bound to the correct target.
Don't pass texture + rect, but instead have
gdk_memory_texture_new_subtexture()
and use it to generate subtextures and pass them.
This has the advantage of downloading a too-large texture only once
instead of N times.
It does not belong in GdkGLContext, it's a renderer thing.
It's also the only user of that API.
Introduce gdk_gl_context_check_version() private API to make version
checks simpler.
It turns out glReadPixels() cannot convert pixels, and only specific
values are allowed to be passed for its arguments. You need to
know which ones, or things will explode.
GL is great.
Pass a format to GdkTextureClass::download(). That way we can download
data in any format.
Also replace gdk_texture_download_texture() with
gdk_memory_texture_from_texture() which again takes a format.
The old functionality is still there for code that wants it: Just pass
gdk_texture_get_format (texture) as the format argument.
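So code that relied on the old behaviour becomes roughly (a sketch;
the exact return type of the private helper is an assumption here):

    GdkMemoryTexture *memtex =
      gdk_memory_texture_from_texture (texture,
                                       gdk_texture_get_format (texture));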
Move the resources of each renderer to its subdirectory.
We've previously done that for the ngl renderer, but it
is better to be consistent and do it for all the renderers.
When we are rendering a texture node to an offscreen,
and we have a clip, we must force the offscreen rendering.
Otherwise, the code will notice: Hey, it already is a texture
node, so no need to render it to a texture again. But when
clipping is involved, that is exactly what we want to do.
Testcase included.
Fixes: #3651
According to the OpenGL spec, a shader object that is still attached
to a program object is only flagged for deletion; it is not actually
deleted until it has been detached. When a program object is deleted,
the shader objects attached to it are detached but not deleted, unless
they have already been flagged for deletion. So we should detach a
shader object before deleting it, and it is best to delete it before
the program object is deleted.
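In practice that is just the usual pattern:

    glAttachShader (program, vertex_shader);
    glAttachShader (program, fragment_shader);
    glLinkProgram (program);

    /* The linked program keeps its code, so the shader objects can go
     * away now: detach first, then delete. */
    glDetachShader (program, vertex_shader);
    glDeleteShader (vertex_shader);
    glDetachShader (program, fragment_shader);
    glDeleteShader (fragment_shader);

    /* ... and eventually ... */
    glDeleteProgram (program);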
This way we can render the first frame of tests/testoutsetshadowdrawing
in 153 ops instead of 183.
And the first frame of gtk4-demo in 260 instead of 300.
These positions are not guaranteed to be in a specific order when linked
into the final GPU program. They need to be specified so that our code
in gskglrenderer.c can use known positions for them to match up with
our GskQuadVertex.
This fixes the GL renderer with macOS's OpenGL shader compiler.
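A sketch of what pinning the locations looks like (treat the attribute
names and locations as illustrative):

    /* Bind the attributes to fixed locations *before* linking, so they
     * line up with the layout of GskQuadVertex. */
    glBindAttribLocation (program, 0, "aPosition");
    glBindAttribLocation (program, 1, "aUv");
    glLinkProgram (program);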
Fixes #3420
Catch the error when it happens, so that we can emit a specific and more
helpful error message.
Also verify that all branches in the code now do indeed set a proper
GError when they fail, so that the final catch-all is no longer needed.
Instead, assert that the error is set so that we catch future code
additions early that do not set the GError.
glFramebufferTexture maps to all faces of a cube map and that is not
needed here. Additionally, texture_id is not deleted after we use the
additional flipped texture, but should be.
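A sketch of the corrected pattern (variable names are illustrative):

    /* attach just the single 2D texture instead of all cube faces */
    glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                            GL_TEXTURE_2D, flipped_texture_id, 0);

    /* ... render the flipped copy ... */

    /* then release the texture that is no longer needed */
    glDeleteTextures (1, &texture_id);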
On desktop GL, GL 1.5 or GL_ARB_occlusion_query is required to get the
glGenQueries() etc. symbols. This isn’t the case on GLES, where they
are provided by GL_EXT_occlusion_query_boolean, and more importantly
have never been made core.
This patch allows gtk4-demo to start when GDK_DEBUG=gl-gles is set, on
my Mali 400 MP running the Lima driver from Mesa.