If glBufferStorage() is available, we can replace our usage of
glBufferSubData() with persistently mapped storage via
glMapBufferRange().
This has 1 disadvantage:
1. It's not supported everywhere; it requires GL 4.4 or
GL_EXT_buffer_storage. But every GPU of the last 10 years should
implement it. So we check for it and keep the old code as a fallback.
The old code can also be forced via GDK_GL_DISABLE=buffer-storage.
But it has 2 advantages:
1. It is what Vulkan does, so it unifies the two renderers' buffer
handling.
2. It is a significant performance boost in use cases with large vertex
buffers. Those are pretty rare, but do happen with lots of text at a
small font size. An example would be a small font in a maximized VTE
terminal or the overview in gnome-text-editor.
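In rough terms, the buffer setup changes to something like this (a
sketch with illustrative names and sizes, not the actual GDK code):

  GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;

  /* once, at buffer creation */
  glBindBuffer (GL_ARRAY_BUFFER, vbo);
  glBufferStorage (GL_ARRAY_BUFFER, size, NULL, flags);
  data = glMapBufferRange (GL_ARRAY_BUFFER, 0, size, flags);

  /* every frame: write vertices straight into 'data' instead of
   * calling glBufferSubData() */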
A custom benchmark tailored for this problem can be created with:
tests/rendernode-create-tests 1000000 text.node
This creates a node file called "text.node" that draws 1 million text
nodes.
(Creating that test takes a minute or so. A smaller number may be useful
on less powerful hardware than my Intel Tigerlake laptop.)
The difference can then be compared via:
tools/gtk4-rendernode-tool benchmark --runs=20 text.node
and
GDK_GL_DISABLE=buffer-storage tools/gtk4-rendernode-tool benchmark --runs=20 text.node
For my laptop, the difference is:
before: 1.1s
after: 0.8s
Related: !7021
The ngl renderer has good support for fractional scaling, so we
can enable this by default now.
If you are using the gl renderer, you can disable fractional
scaling with the
GDK_DEBUG=gl-no-fractional
environment variable.
According to EXT_color_buffer_half_float it should be renderable, but
glGenerateMipmap() fails on it with Mesa 23.3, so just pretend it's not
renderable until that is fixed.
This keeps CI from failing.
I naively assumed the EXT_color_buffer_float and
EXT_color_buffer_half_float extensions would mirror each other, but they
do not. The float extension explicitly excludes RGB32F from the
renderable formats.
This makes no sense by itself, but we want to create the EGLImage at
DmabufTexture construction so that we can actually reject dmabufs that
we can't create EGLImages for.
This will make it possible to bail out early when we hit the stride
limitation on AMD GPUs.
Checks which features of a given memory format are supported by
the current GL implementation.
We check:
* usable: Can be used as a texture with the GL_NEAREST filter
* renderable: Can be used as a render target
* filterable: Can be used with GL_LINEAR
In normal GL, all formats are all of these things, but GLES is a lot
more picky.
So far nobody uses this.
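For reference, the renderable check can be probed roughly like this (a
sketch; gl_internal_format/gl_format/gl_type stand in for the values of
the memory format being checked):

  GLuint tex, fbo;
  gboolean renderable;

  glGenTextures (1, &tex);
  glBindTexture (GL_TEXTURE_2D, tex);
  glTexImage2D (GL_TEXTURE_2D, 0, gl_internal_format, 1, 1, 0,
                gl_format, gl_type, NULL);

  glGenFramebuffers (1, &fbo);
  glBindFramebuffer (GL_FRAMEBUFFER, fbo);
  glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_TEXTURE_2D, tex, 0);

  renderable = glCheckFramebufferStatus (GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;

  glDeleteFramebuffers (1, &fbo);
  glDeleteTextures (1, &tex);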
With the advent of dmabuf support, using GLES has become more
attractive, since we can use its support for external textures to
handle more dmabuf formats.
You can go back to the previous preference order by setting
GDK_DEBUG=gl-prefer-gl
Make sure all our dmabuf debug messages are display-scoped so the
inspector doesn't trigger them, use the same formatting throughout,
and improve consistency of wording here and there.
This started out as busywork, but it ended up doing many separate
things. If I could start over, I'd split them into multiple commits:
1. Remove G_ENABLE_DEBUG around GDK_DEBUG_*() calls
This is not needed at all, the calls themselves take care of it.
2. Remove G_ENABLE_DEBUG around profiling code
This now enables profiling support in release builds.
3. Stop poking _gdk_debug_flags and use GDK_DEBUG_CHECK()
This was old code that was never updated.
4. Make !G_ENABLE_DEBUG turn off GDK_DEBUG_CHECK()
The code used to
#define GDK_DEBUG_CHECK(...) false
#define GDK_DEBUG(...)
which would compile away all the code inside those macros. This
means a lot of variable definitions and debug utility functions
would suddenly no longer be used and cause compiler errors.
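One way to get the behavior described in point 4 without compiling the
guarded code away (a sketch, not necessarily the exact macros used): make
GDK_DEBUG_CHECK() a constant-false expression, so the compiler still sees
everything inside the guarded blocks and simply eliminates it as dead code.

  #ifdef G_ENABLE_DEBUG
  /* gdk_check_debug_flag() is an illustrative name for the real check */
  #define GDK_DEBUG_CHECK(type) G_UNLIKELY (gdk_check_debug_flag (GDK_DEBUG_##type))
  #else
  /* constant false: guarded code still compiles, then gets optimized out */
  #define GDK_DEBUG_CHECK(type) FALSE
  #endif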
Vertex arrays are available in GL and in GLES >= 3.
We don't check for the GLES extension that provided
vertex arrays in older GLES versions, since that would
require using a different API.
This api avoids version checks all over the place.
Now that all contexts do that, insist that they keep doing it.
And because they keep doing it, we can support querying the GL version
from gdk_gl_context_get_version() without requiring the context to be
made current.
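For example, a minimal usage sketch that should now work without a
prior gdk_gl_context_make_current():

  int major, minor;

  gdk_gl_context_get_version (context, &major, &minor);
  g_print ("GL version: %d.%d\n", major, minor);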
The EGL spec states:
The context returned must be the specified version, or a later
version which is backwards compatible with that version.
Even if a later version is returned, the specified version
must correspond to a defined version of the client API.
GTK has so far been relying on EGL implementations returning a
later version, because that is what Mesa does.
But ANGLE does not do that and only provides the minimum version, which
means the Windows EGL backend has been forced to use a lower GL version
for no reason.
So fix this and try versions in order from highest to lowest.
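Conceptually, context creation becomes a loop like this (a sketch; the
exact list of versions tried is an assumption):

  /* try the highest version first, fall back to lower ones */
  static const int versions[][2] = { { 4, 6 }, { 4, 4 }, { 4, 0 }, { 3, 2 }, { 3, 0 } };
  EGLContext ctx = EGL_NO_CONTEXT;

  for (gsize i = 0; i < G_N_ELEMENTS (versions) && ctx == EGL_NO_CONTEXT; i++)
    {
      EGLint attribs[] = {
        EGL_CONTEXT_MAJOR_VERSION, versions[i][0],
        EGL_CONTEXT_MINOR_VERSION, versions[i][1],
        EGL_NONE,
      };

      ctx = eglCreateContext (egl_display, egl_config, EGL_NO_CONTEXT, attribs);
    }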
GLES 2.0 is fine now with current GTK, according to B. Otte.
Let's use the same minimum requirement for all implementations.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Fractional scaling with the GL renderer is
experimental for now, so we disable it unless
GDK_DEBUG=gl-fractional is set.
This will give us time to work out the kinks.