If glBufferStorage() is available, we can replace our usage of
glBufferSubData() with persistently mapped storage via
glMapBufferRange() (see the sketch after the lists below).
This has 1 disadvantage:
1. It's not supported everywhere; it requires GL 4.4 or
GL_EXT_buffer_storage. But every GPU of the last 10 years should
implement it, so we check for it and keep the old code.
The old code can also be forced via GDK_GL_DISABLE=buffer-storage.
But it has 2 advantages:
1. It is what Vulkan does, so it unifies the two renderers' buffer
handling.
2. It is a significant performance boost in use cases with large vertex
buffers. Those are pretty rare, but do happen with lots of text at a
small font size. An example would be a small font in a maximized VTE
terminal or the overview in gnome-text-editor.
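A minimal sketch of the persistent-mapping setup (illustrative only,
not GTK's actual code; assumes a vertex buffer of size bytes):

  GLbitfield flags = GL_MAP_WRITE_BIT
                   | GL_MAP_PERSISTENT_BIT
                   | GL_MAP_COHERENT_BIT;

  /* immutable storage that stays mappable for the buffer's lifetime */
  glBufferStorage (GL_ARRAY_BUFFER, size, NULL, flags);
  data = glMapBufferRange (GL_ARRAY_BUFFER, 0, size, flags);

  /* data remains valid from now on; write vertices into it directly
   * each frame instead of calling glBufferSubData() */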
A custom benchmark tailored for this problem can be created with:
tests/rendernode-create-tests 1000000 text.node
This creates a node file called "text.node" that draws 1 million text
nodes.
(Creating that test takes a minute or so. A smaller number may be useful
on less powerful hardware than my Intel Tigerlake laptop.)
The difference can then be compared via:
tools/gtk4-rendernode-tool benchmark --runs=20 text.node
and
GDK_GL_DISABLE=buffer-storage tools/gtk4-rendernode-tool benchmark --runs=20 text.node
On my laptop, the difference is:
before: 1.1s
after: 0.8s
Related: !7021
Checks which features of a given memory format are supported by
the current GL implementation.
We check:
* usable: Can be used as a texture with NEAREST filter
* renderable: Can be used as a render target
* filterable: Can be used with GL_LINEAR
In desktop GL, all formats support all of these, but GLES is a lot
pickier.
So far nobody uses this.
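To illustrate the kind of probing this implies, here is a hedged
sketch of the renderable check (illustrative only, not the actual GTK
code; internal_format, format and type describe the memory format):

  GLuint tex, fbo;
  gboolean renderable;

  glGenTextures (1, &tex);
  glBindTexture (GL_TEXTURE_2D, tex);
  glTexImage2D (GL_TEXTURE_2D, 0, internal_format, 4, 4, 0,
                format, type, NULL);
  glGenFramebuffers (1, &fbo);
  glBindFramebuffer (GL_FRAMEBUFFER, fbo);
  glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_TEXTURE_2D, tex, 0);

  /* complete == the format can be used as a render target */
  renderable = glCheckFramebufferStatus (GL_FRAMEBUFFER)
               == GL_FRAMEBUFFER_COMPLETE;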
Vertex arrays are available in GL and in GLES >= 3.
We don't check for the GLES extension (GL_OES_vertex_array_object)
that provided vertex arrays in older GLES, since that would require
using a different API.
This API avoids version checks all over the place.
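A hedged sketch of the resulting pattern (the helper name is
illustrative, not necessarily the real private API):

  if (gdk_gl_context_has_vertex_arrays (context))
    {
      glGenVertexArrays (1, &vao);
      glBindVertexArray (vao);
    }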
With XWayland and direct scanout it is possible that some apps get into
a situation where more than 2 buffers are in flight, and in that case we
still want to be able to track the change regions for those buffers.
Usually 3 buffers are in use, so we go one higher, just to be safe.
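A minimal sketch of the idea (constant and field names are
illustrative only):

  #define N_TRACKED_BUFFERS 4   /* usually 3 in flight, plus one spare */

  int i;

  /* each frame, add the new damage to every tracked buffer */
  for (i = 0; i < N_TRACKED_BUFFERS; i++)
    cairo_region_union (self->damage[i], frame_damage);

  /* the buffer being reused has accumulated exactly the region that
   * must be redrawn; reset its slot for the next round */
  redraw_region = self->damage[current];
  self->damage[current] = cairo_region_create ();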
The EGL spec states:
  The context returned must be the specified version, or a later
  version which is backwards compatible with that version.
  Even if a later version is returned, the specified version
  must correspond to a defined version of the client API.
GTK has so far been relying on EGL implementations returning a
later version, because that is what Mesa does.
But ANGLE does not do that and only provides the minimum version, which
means Windows EGL has been forced to use a lower EGL version for no
reason.
So fix this and try versions in order from highest to lowest.
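A hedged sketch of that fallback loop (the exact version list is
illustrative; assumes an initialized display and chosen config, and
EGL_CONTEXT_MINOR_VERSION needs EGL 1.5 or EGL_KHR_create_context):

  static const EGLint versions[][2] = {
    { 3, 2 }, { 3, 1 }, { 3, 0 }, { 2, 0 }
  };
  EGLContext ctx = EGL_NO_CONTEXT;
  guint i;

  for (i = 0; i < G_N_ELEMENTS (versions) && ctx == EGL_NO_CONTEXT; i++)
    {
      EGLint attribs[] = {
        EGL_CONTEXT_MAJOR_VERSION, versions[i][0],
        EGL_CONTEXT_MINOR_VERSION, versions[i][1],
        EGL_NONE
      };

      ctx = eglCreateContext (display, config, EGL_NO_CONTEXT, attribs);
    }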
According to B. Otte, GLES 2.0 is fine with current GTK.
Let's use the same minimum requirement for all implementations.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Fractional scaling with the GL renderer is
experimental for now, so we disable it unless
GDK_DEBUG=gl-fractional is set.
This will give us time to work out the kinks.
... and use this check in gdk_gl_context_make_current() and
gdk_gl_context_get_current() to make sure the context really is still
current.
The context no longer being current can happen when external GL
implementations make their own contexts current in the same threads
that GDK contexts are used in. WebKit, for example, does that.
Theoretically, this should also allow external EGL code to run in X11
applications when GDK chooses to use GLX, but I didn't try it.
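A hedged sketch of such a check for the EGL case (names are
illustrative; the real private API differs in detail):

  static gboolean
  gdk_gl_context_is_still_current (GdkGLContext *context)
  {
    GdkGLContextPrivate *priv = gdk_gl_context_get_instance_private (context);

    /* ask EGL itself instead of trusting GDK's bookkeeping */
    return eglGetCurrentContext () == priv->egl_context;
  }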
Fixes #5392
It is useful for backends to get user-set preferences while ensuring
the correctness of the result, which will always be greater than or
equal to the minimum version provided.
There are situations where our "default framebuffer" is not actually
zero, yet we still want to apply a scissor rect.
Generally, 0 is the default framebuffer. But on platforms where we need
to bind a platform-specific feature to a GL_FRAMEBUFFER, we might have a
default that is not 0. For example, on macOS we bind an IOSurfaceRef to
a GL_TEXTURE_RECTANGLE which then is assigned as the backing store for a
framebuffer. This is different from using gsk_gl_renderer_render_texture()
in that we don't want to incur an extra copy to the destination surface,
nor do we even have a way to pass a texture_id into render_texture().
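A minimal sketch of what applying the scissor rect then looks like
(the default_framebuffer field is illustrative):

  /* bind whatever this context considers its default framebuffer,
   * which may be non-zero on macOS */
  glBindFramebuffer (GL_FRAMEBUFFER, self->default_framebuffer);
  glEnable (GL_SCISSOR_TEST);
  glScissor (x, y, width, height);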
Instead of just passing major/minor, pass them twice, once for GL and
once for GLES. This way, we don't need to check for GL and GLES
separately.
If something is supported unconditionally, passing 0/0 works fine.
That said, I'd like to group the arguments somehow, because otherwise
it's just a confusing list of numbers - but I have no idea how to do
that.
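One possible shape of such a check, as a hedged sketch (the real
private API may differ):

  /* needs GL 4.4 or GLES 3.1; 0/0 would mean unconditionally supported */
  if (gdk_gl_context_check_version (context, 4, 4, 3, 1))
    use_buffer_storage ();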
When destroying the EGLSurface or GLXDrawable of a GdkSurface, make sure
the current context is not still bound to it.
If it is, clear the current context.
Fixes #4554
This is an alternative to gdk_surface_create_gl_context() when the
context is meant to only draw to textures.
This is useful in the testsuite or in GStreamer or with GLArea,
basically whenever we want to do GL stuff but don't need to actually
draw anything on screen.
A bunch of code will need to be updated to deal with context->surface
being NULL.
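A hedged sketch of the intended usage (error handling omitted; the
constructor name mirrors gdk_surface_create_gl_context()):

  context = gdk_display_create_gl_context (display, &error);
  gdk_gl_context_realize (context, &error);
  gdk_gl_context_make_current (context);

  /* context->surface is NULL; render into an FBO or texture instead */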
It does not belong in GdkGLContext; it's a renderer thing.
It's also the only user of that API.
Introduce gdk_gl_context_check_version() private API to make version
checks simpler.
Add gdk_gl_context_is_api_allowed() for backends and make them use it.
Finally, have them return the final API as the return value (or 0 on
error).
And then use that API instead of a use_es boolean flag.
Fixes #4221
Unify the X11 and Wayland EGL contexts.
This is a bit ugly to implement, because I don't want to create an
interface and I can't make them inherit from the same class, because
one needs to inherit from X11GLContext and the other from
WaylandGLContext.
So we have to put the code in GdkGLContext and make sure non-EGL
contexts can't accidentally run it. This is rather easy because we can
just check for priv->egl_context != NULL.
Creative people managed to create an X11 display and a Wayland display
at once, thereby getting EGL and GLX involved in a fight to the death
over the ownership of the glFoo() symbol space.
A way to force such a fight with available tools here is (on Wayland)
running something like:
GTK_INSPECTOR_DISPLAY=:1 GTK_DEBUG=interactive gtk4-demo
Related: xdg-desktop-portal-gnome#5
Now that we have the display's context to hook into, we can use it to
construct other GL contexts and don't need a GdkSurface vfunc anymore.
This has the added benefit that backends can have different GdkGLContext
classes on the display and get new GLContexts generated from them, so
we get multiple GL backend support per GDK backend for free.
I originally wanted to make this a vfunc on GdkGLContextClass, but
it turns out all the backends would just call g_object_new() anyway.
Instead of
Display::make_gl_context_current()
we now have
GLContext::clear_current()
GLContext::make_current()
This fits better with the backends (we can actually implement
clearCurrent on macOS now) and makes it easier to implement different GL
backends for backends (like EGL/GLX on X11).
We also pass a surfaceless boolean to make_current() so the calling code
can decide if a surface needs to be bound or not, because the backends
were all doing whatever, which was very counterproductive.
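The vfunc shapes this implies, as an illustrative sketch (actual
signatures may differ):

  gboolean (* clear_current) (GdkGLContext *context);
  gboolean (* make_current)  (GdkGLContext *context,
                              gboolean      surfaceless);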
GLES doesn't support the GL_BGRA + GL_UNSIGNED_INT_8_8_8_8_REV hack that
we use on desktop OpenGL to upload textures directly in the cairo
pixel format. This adds the required conversions to all the places
that currently need it.
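For reference, a sketch of the desktop-GL hack in question
(illustrative; data points at the cairo image surface pixels):

  /* cairo ARGB32 is premultiplied native-endian ARGB, which desktop GL
   * can consume directly */
  glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, data);

  /* GLES rejects this format/type combination, hence the conversions */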
We also add a data_format to the internal gdk_gl_context_upload_texture()
function to make it clearer what the formats are. Currently it is always
the cairo image surface format, but eventually we want to support other
formats so that we can avoid some of the unnecessary conversions we do.
Also, the current gdk_gl_context_upload_texture() code always converts
to a cairo format and uploads that like we did before. Later commits
will allow this to use other upload formats that GL supports to avoid
conversions.
gdk_gl_context_has_framebuffer_blit() and gdk_gl_context_has_frame_terminator()
were only used by GDK/Win32, and they do not provide performance advantages
in GTK master, so clean up the code a bit by dropping them.
We need to use GL_BGRA instead of GL_RGBA when doing glReadPixels() on
EGL on Windows (ANGLE) so that the red and blue channels won't end up
swapped.
Also fix the logic where we determine whether to bit blit or redraw
everything.
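A minimal sketch of the read-back (illustrative; assumes a tightly
packed destination buffer):

  /* on ANGLE, GL_BGRA matches the native pixel layout, so red and
   * blue stay in the right place */
  glReadPixels (x, y, width, height, GL_BGRA, GL_UNSIGNED_BYTE, data);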