This is a way to query the damaged area of the backbuffer.
The GL renderer uses this to compute the extents of that damage region
(computed via buffer age) and uses them to minimize the area to redraw.
This changes the semantics of GL rendering to "When calling
gdk_window_begin_frame() with a GL context, the area returned by
gdk_gl_context_get_damage() needs to be redrawn, and every other pixel
of the backbuffer is guaranteed to be correct."
After gdk_window_end_frame() on a GL-drawn window, the whole backbuffer
must be correct.
We can always call glXSwapBuffers() now because of this.
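A rough sketch of the buffer-age query this builds on, assuming a GLX
drawable with GLX_EXT_buffer_age; the region-tracking helpers and
variables are illustrative, not actual GDK API:

    unsigned int age = 0;
    cairo_region_t *damage;

    /* How many swaps ago was the current backbuffer last drawn? */
    glXQueryDrawable (xdisplay, glx_drawable,
                      GLX_BACK_BUFFER_AGE_EXT, &age);

    if (age == 0 || age > n_tracked_frames)
      {
        /* Contents are undefined; everything must be redrawn */
        damage = get_whole_window_region (window);
      }
    else
      {
        /* Union of the regions drawn in the last (age - 1) frames */
        damage = get_union_of_recent_frame_regions (window, age - 1);
      }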
... instead of a gl context.
This requires some refactoring in the way we mark the shared context as
drawing: We now call begin_frame/end_frame() on it and ignore the call
on the main context.
Unfortunately we need to do this check in all vfuncs, which sucks. But I
haven't found a better way.
This way, we can query the GL context's state via
gdk_gl_context_is_drawing().
Use this function to mark GL contexts as attached and grant them access
to the front/backbuffer for rendering.
All of this is still unused because GL drawing is still disabled.
No visible changes as GL rendering is disabled at the moment.
What was done:
1. Move window->invalidate_for_new_frame to glcontext->begin_frame
This moves the code to where it is used (the GLContext) and prepares it
for being called at the point where we actually begin to draw the
frame.
2. Get rid of buffer-age usage
We want to let the application render directly to the backbuffer.
Because of that, we cannot make any assumptions about the contents the
application renders outside the clip area.
In particular GskGLRenderer renders random stuff there but not actual
contents.
3. Pass the actual GL context
Previously, we passed the shared context to end_frame; now we pass the
actual GL context that the application uses for rendering. This is so
that the vfuncs can prepare the actual contexts for rendering (they
don't currently).
4. Simplify the code
The previous code set up the final drawing method in begin_frame.
Instead, we now just ensure the clip area is something we can render
and decide on the actual method in end_frame.
This is both more robust (we can change the clip area in between if we
want to) and less code.
Calling eglGetDisplay forces libEGL to guess what kind of pointer you
passed it. Different EGL libraries will do different things here, and in
particular glvnd will do something different than Mesa. Since we do have
an API that allows us to explicitly type the display, use it.
The explicit call to eglGetProcAddress is working around a bug in
libepoxy 1.3, which does not understand the EGL concept of client
extensions. Since it does not, the normal epoxy resolver for
eglGetPlatformDisplayEXT would not find any provider for that entry
point, and crash when you attempted to call it.
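Combined, the two fixes look roughly like this (a sketch; dpy and
wl_display stand in for the EGL and native Wayland displays):

    /* Query client extensions by hand; epoxy 1.3 cannot do this */
    const char *client_exts =
      eglQueryString (EGL_NO_DISPLAY, EGL_EXTENSIONS);

    if (client_exts != NULL &&
        strstr (client_exts, "EGL_EXT_platform_base") != NULL)
      {
        /* Resolve the entry point explicitly to dodge the epoxy bug */
        PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
          (void *) eglGetProcAddress ("eglGetPlatformDisplayEXT");

        dpy = getPlatformDisplay (EGL_PLATFORM_WAYLAND_EXT,
                                  wl_display, NULL);
      }
    else
      {
        /* Last resort: let libEGL guess what the pointer is */
        dpy = eglGetDisplay ((EGLNativeDisplayType) wl_display);
      }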
Signed-off-by: Adam Jackson <ajax@redhat.com>
https://bugzilla.gnome.org/show_bug.cgi?id=772415
EGLDisplays are already opaque pointers, and eglGetDisplay returns an
EGLDisplay, not a pointer to one.
Signed-off-by: Adam Jackson <ajax@redhat.com>
https://bugzilla.gnome.org/show_bug.cgi?id=772415
The g_print() documentation explicitly says not to do this, since
g_print() is meant to be redirected by applications. Instead, use
g_message() for logging that can be triggered via GTK_DEBUG.
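For instance (a sketch; the debug flag and display variable are
illustrative):

    /* g_message() goes through the application's log handlers;
     * g_print() would bypass any redirection the application set up */
    if (gl_debug_enabled)
      g_message ("EGL version %s, vendor %s",
                 eglQueryString (edpy, EGL_VERSION),
                 eglQueryString (edpy, EGL_VENDOR));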
If the shared context is in legacy mode, or if the creation of a core
profile context fails, we fall back to an EGL context in compatibility
mode.
Since we're relying on a fairly new EGL implementation for Wayland, we
don't fall back to the older EGL API, and instead we always require the
EGL_KHR_create_context extension.
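A sketch of that fallback path using the EGL_KHR_create_context
attributes (config and shared-context handling elided, a prior
eglBindAPI (EGL_OPENGL_API) assumed):

    EGLint core_attrs[] = {
      EGL_CONTEXT_MAJOR_VERSION_KHR, 3,
      EGL_CONTEXT_MINOR_VERSION_KHR, 2,
      EGL_CONTEXT_OPENGL_PROFILE_MASK_KHR,
      EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT_KHR,
      EGL_NONE
    };
    EGLContext ctx = EGL_NO_CONTEXT;

    if (!shared_context_is_legacy)
      ctx = eglCreateContext (dpy, config, share_ctx, core_attrs);

    if (ctx == EGL_NO_CONTEXT)
      {
        /* Legacy shared context, or core profile creation failed:
         * retry with a compatibility profile */
        EGLint compat_attrs[] = {
          EGL_CONTEXT_OPENGL_PROFILE_MASK_KHR,
          EGL_CONTEXT_OPENGL_COMPATIBILITY_PROFILE_BIT_KHR,
          EGL_NONE
        };
        ctx = eglCreateContext (dpy, config, share_ctx, compat_attrs);
      }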
https://bugzilla.gnome.org/show_bug.cgi?id=756142
The existence of OpenGL implementations that do not provide full core
profile compatibility for reasons beyond the technical, like llvmpipe
not implementing floating point buffers, makes it a bit harder to
justify the existence of GdkGLProfile and to document the fact that we
use core profiles.
Since we do not have any existing profile except the default, we can
remove the GdkGLProfile and its related API from GDK and GTK+, and sweep
the whole thing under the carpet, while we wait for an extension that
lets us ask for the most compatible profile possible.
https://bugzilla.gnome.org/show_bug.cgi?id=744407
Now that we have a two-stage GL context creation sequence, we can move
the profile to a pre-realize option, like the debug and forward
compatibility bits, or the GL version to use.
One of the major requests by OpenGL users has been the ability to
specify settings when creating a GL context, like the version to use
or whether the debug support should be enabled.
We have a couple of requirements in terms of API:
• avoid, if at all possible, the "C arrays of integers with
attribute, value pairs", which are hard to write and hard
to bind in non-C languages.
• allow failing in a recoverable way.
• do not make the GL context creation API a mess of arguments.
Looking at prior art, it seems that a common pattern is to split the
construction phase in two:
• a first phase that creates a GL context wrapper object and
does preliminary checks on the environment.
• a second phase that creates the backend-specific GL object.
We adopted a similar pattern:
• gdk_window_create_gl_context() creates a GdkGLContext
• gdk_gl_context_realize() creates the underlying resources
Calling gdk_gl_context_make_current() also realizes the context, so
simple GL users do not need to care. Advanced users will want to
call gdk_window_create_gl_context(), set up the optional requirements,
and then call gdk_gl_context_realize(). If either of these two steps
fails, it's possible to recover by changing the requirements, or simply
creating a new GdkGLContext instance.
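In practice the advanced path looks roughly like this (handle_error is
a stand-in for the caller's error handling):

    GError *error = NULL;
    GdkGLContext *context;

    /* First phase: wrapper object and environment checks */
    context = gdk_window_create_gl_context (window, &error);
    if (context == NULL)
      return handle_error (error);  /* e.g. no GL support at all */

    /* Optional requirements, set before realization */
    gdk_gl_context_set_required_version (context, 3, 2);
    gdk_gl_context_set_debug_enabled (context, TRUE);

    /* Second phase: create the backend-specific GL resources */
    if (!gdk_gl_context_realize (context, &error))
      {
        /* Recoverable: relax the requirements and realize again,
         * or drop the context and create a new one */
      }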
https://bugzilla.gnome.org/show_bug.cgi?id=741946
We need to use this in the code path where we make the context
non-current during destroy, because at that point the window
could be destroyed and gdk_window_get_display() would return
NULL.
This is not really needed. The GL context is totally tied to the
window it is created from, by virtue of being shared with the paint
context of that window, and that context always has the visual of the
window (which we can already get).
Also, all user-visible contexts are essentially offscreen contexts, so
a visual doesn't make sense for them. They only use FBOs, which have
whatever format the user sets up.
To properly support multithreaded use, we use a global GPrivate to
track the current context. Since we also no longer need to track the
current context on the display, we move gdk_display_destroy_gl_context()
to GdkGLContext::discard.
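A minimal sketch of the thread-local tracking (not the exact code):

    static GPrivate thread_current_context =
      G_PRIVATE_INIT (g_object_unref);

    static void
    set_current_context (GdkGLContext *context)
    {
      /* Each thread has its own "current" context;
       * g_private_replace() unrefs the previously stored one */
      g_private_replace (&thread_current_context,
                         context != NULL ? g_object_ref (context) : NULL);
    }

    static GdkGLContext *
    get_current_context (void)
    {
      return g_private_get (&thread_current_context);
    }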
It's not really reasonable to handle failures in make_current: it
basically only happens if you pass invalid arguments to it, and that's
not something we trap for similar operations on the X drawing side.
If GL is not supported, that should be handled by the context creation
failing, and anything going wrong after that is essentially a critical
(or an async X error).
We make user-facing GL contexts not attached to a surface if possible,
or attached to dummy surfaces. This means nothing can accidentally
read from or write to the toplevel backbuffer.
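On EGL this can be sketched as follows (extension and config checks
simplified; the config must support pbuffers for the fallback path):

    if (have_khr_surfaceless_context)
      {
        /* No surface at all, so there is nothing to scribble on */
        eglMakeCurrent (dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);
      }
    else
      {
        /* A throwaway 1x1 pbuffer as the dummy surface */
        EGLint attrs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
        EGLSurface dummy = eglCreatePbufferSurface (dpy, config, attrs);

        eglMakeCurrent (dpy, dummy, dummy, ctx);
      }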