This has the benefit that we can refactor it and make sure we deal with
GdkDisplay::init_gl() not being called at all because
GDK_DEBUG=gl-disable had been specified.
It's not used there, but both backends have independent
implementations for it.
I want to get rid of GdkGLContextX11, and moving code out of it is the
first step.
Now that we have the display's context to hook into, we can use it to
construct other GL contexts and don't need a GdkSurface vfunc anymore.
This has the added benefit that backends can have different GdkGLContext
classes on the display and get new GLContexts generated from them, so
we get multiple GL backend support per GDK backend for free.
I originally wanted to make this a vfunc on GdkGLContextClass, but
it turns out all the backends would just call g_object_new() anyway.
Instead of
Display::make_gl_context_current()
we now have
GLContext::clear_current()
GLContext::make_current()
This fits better with the backends (we can actually implement
clear_current() on macOS now) and makes it easier to implement different
GL backends for the GDK backends (like EGL/GLX on X11).
We also pass a surfaceless boolean to make_current() so the calling code
can decide whether a surface needs to be bound; previously the backends
each did their own thing here, which was very counterproductive.
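For orientation, a minimal usage sketch with the public entry points that
match the new shape; the surfaceless boolean lives on the backend-facing
internals, not on these calls, and render_something() is just an
illustrative wrapper:

#include <gdk/gdk.h>

static void
render_something (GdkGLContext *context)
{
  gdk_gl_context_make_current (context);   /* bind this context */
  /* ... issue GL calls here ... */
  gdk_gl_context_clear_current ();         /* unbind whatever is current */
}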
... or more exactly: Only use paint contexts with
gdk_cairo_draw_from_gl().
Instead of paint contexts being the only contexts that call swapBuffer(),
any context can now be used for this when it is used with
begin_frame()/end_frame().
This removes 2 features:
1. We no longer need a big sharing hierarchy. All contexts are now
shared with gdk_display_get_gl_context().
2. There is no longer a difference between attached and non-attached
contexts. All contexts work the same way.
The vfunc is called to initialize GL and it returns a "base" context
that GDK then uses as the context all others are shared with. So the GL
context share tree now looks like:
+ context from init_gl
  - context1
  - context2
  ...
So this is a flat tree now, the complexity is gone.
The only caveat is that backends now need to create a GL context when
initializing GL so some refactoring was needed.
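As a rough illustration (not the actual GDK code), a backend's init_gl
vfunc now boils down to something like the sketch below;
gdk_example_display_init_gl, example_setup_gl_library and
GDK_TYPE_EXAMPLE_GL_CONTEXT are hypothetical names:

#include <gdk/gdk.h>

/* hypothetical helpers/types standing in for a real backend */
static gboolean example_setup_gl_library (GdkDisplay *display, GError **error);
GType gdk_example_gl_context_get_type (void);
#define GDK_TYPE_EXAMPLE_GL_CONTEXT (gdk_example_gl_context_get_type ())

static GdkGLContext *
gdk_example_display_init_gl (GdkDisplay  *display,
                             GError     **error)
{
  if (!example_setup_gl_library (display, error))
    return NULL;

  /* Return the "base" context that all later contexts will share with. */
  return g_object_new (GDK_TYPE_EXAMPLE_GL_CONTEXT,
                       "display", display,
                       NULL);
}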
Two new functions have been added:
* gdk_display_prepare_gl()
This is public API and can be used to ensure that GL has been
initialized, or, if not, to retrieve an error to display (or debug-print).
* gdk_display_get_gl_context()
This is a private function to retrieve the base context from
init_gl(). It replaces gdk_surface_get_shared_data_context().
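A minimal caller-side sketch for the public half; ensure_gl() is just a
hypothetical wrapper around the documented call:

#include <gdk/gdk.h>

static gboolean
ensure_gl (GdkDisplay *display)
{
  GError *error = NULL;

  if (!gdk_display_prepare_gl (display, &error))
    {
      g_warning ("GL is not available: %s", error->message);
      g_clear_error (&error);
      return FALSE;   /* keep using the cairo/software path */
    }

  return TRUE;
}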
We try EGL first, but are very picky about what we accept.
If that fails, we try to go with GLX instead.
And if that also fails, we try EGL again, but this time accept anything.
The idea here is that EGL is the preferred method going forward, but GLX is
the tried and tested method that we know works. So if we detect issues with
EGL, we want to avoid using it in favor of GLX.
Also add a GDK_DEBUG=gl-egl option to force EGL at all costs and not try
GLX.
That way, we can give a useful error message when things break down for
users.
These error messages could still be improved in places (like looking at
the actual EGL error codes), but that seemed overkill.
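Schematically, the selection order looks roughly like this (illustrative
only; init_gl_with_fallbacks, try_egl and try_glx are hypothetical
stand-ins, and force_egl stands in for the GDK_DEBUG=gl-egl check):

#include <gdk/gdk.h>

/* hypothetical helpers standing in for the real EGL/GLX setup paths */
static GdkGLContext *try_egl (GdkDisplay *display, gboolean strict, GError **error);
static GdkGLContext *try_glx (GdkDisplay *display, GError **error);

static GdkGLContext *
init_gl_with_fallbacks (GdkDisplay  *display,
                        gboolean     force_egl,   /* GDK_DEBUG=gl-egl */
                        GError     **error)
{
  GdkGLContext *context;

  /* 1. Try EGL first, but only accept configurations we fully trust. */
  context = try_egl (display, TRUE, error);
  if (context != NULL || force_egl)
    return context;   /* with force_egl we never fall back to GLX */

  /* 2. Fall back to the tried and tested GLX path. */
  g_clear_error (error);
  context = try_glx (display, error);
  if (context != NULL)
    return context;

  /* 3. Last resort: EGL again, this time accepting anything. */
  g_clear_error (error);
  return try_egl (display, FALSE, error);
}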
Instead of going via GdkVisual, doing a preselection and letting the GL
initialization improve it, let the GL initialization pick an X Visual
directly, using X Visual code.
The code should select the same visuals as before as it tries to apply
the same logic, but it's a rewrite, so I expect I messed something up.
Avoids having to use private data, though the benefit is somewhat
limited as we still have to put the destructor in the EGL code and can't
just put it in gdk_surface_x11_finalize().
We need to initialize GL to select the Visual we are going to use for
all our Windows.
As the Visual needs to be known before we know if we are even going to
use GL later, we can't avoid initializing it.
Note that this previously happened, too. It was just hidden behind the
GdkScreen initialization.
If we want to add an EGL implementation for the X11 backend, we are
going to need to move the GLX bits into their own class. The first step
is to declare GdkX11GLContext as an abstract type, and then subclass it
into a GdkX11GLContextGLX type, which includes the whole GLX
implementation.
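A sketch of what the declarations look like after the split, using
standard GObject macros (the header layout and exact macro choices are
assumptions, not copied from the code):

#include <gdk/gdk.h>

/* gdkglcontext-x11.h: the abstract base type (made abstract in the
 * corresponding .c file with G_DEFINE_ABSTRACT_TYPE) */
#define GDK_TYPE_X11_GL_CONTEXT (gdk_x11_gl_context_get_type ())
G_DECLARE_DERIVABLE_TYPE (GdkX11GLContext, gdk_x11_gl_context,
                          GDK, X11_GL_CONTEXT, GdkGLContext)

/* gdkglcontext-glx.h: the GLX implementation, carrying the GLX bits */
#define GDK_TYPE_X11_GL_CONTEXT_GLX (gdk_x11_gl_context_glx_get_type ())
G_DECLARE_FINAL_TYPE (GdkX11GLContextGLX, gdk_x11_gl_context_glx,
                      GDK, X11_GL_CONTEXT_GLX, GdkX11GLContext)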
With the vendor provided Nvidia driver there is a small window of time
after drawing to a GL surface before the updates to that surface
can be used by the compositor.
Drawing is already coordinated with the compositor through the frame
synchronization protocol detailed here:
https://fishsoup.net/misc/wm-spec-synchronization.html
Unfortunately, at the moment, GdkX11Surface tells the compositor the
frame is ready immediately after drawing to the surface, not later,
when it's consumable by the compositor.
This commit defers announcing the frame as ready until it's consumable
by the compositor. It does this by listening for the X server to announce
damage events associated with the frame drawing. It tries to find the
right damage event by waiting until a fence placed at buffer swap time
signals.
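The mechanism, reduced to its core idea (an illustrative sketch, not the
actual GDK code; announce_frame_ready() is a hypothetical stand-in for
completing the frame synchronization protocol):

#include <epoxy/gl.h>

static void announce_frame_ready (void);   /* hypothetical */

/* Called right after swapping buffers: fence this frame's GPU work. */
static GLsync
fence_after_swap (void)
{
  return glFenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

/* Called when an XDamageNotify event for the surface arrives. */
static void
on_damage_event (GLsync fence)
{
  GLenum status = glClientWaitSync (fence, 0, 0);   /* poll, don't block */

  if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED)
    {
      glDeleteSync (fence);
      announce_frame_ready ();   /* the frame is now consumable */
    }
}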
Rename all *window.[ch] source files.
This is an automatic operation, done by the following commands:
for i in $(git ls-files gdk | grep window); do
    git mv $i $(echo $i | sed s/window/surface/);
    git sed -f g $(basename $i) $(basename $i | sed s/window/surface/) ;
done
git checkout NEWS* po-properties po
This renames the GdkWindow class and related classes (impl, backend
subclasses) to surface. Additionally it renames related types:
GdkWindowAttr, GdkWindowPaint, GdkWindowWindowClass, GdkWindowType,
GdkWindowTypeHint, GdkWindowHints, GdkWindowState, GdkWindowEdge
This is an automatic conversion using the below commands:
git sed -f g GdkWindowWindowClass GdkSurfaceSurfaceClass
git sed -f g GdkWindow GdkSurface
git sed -f g "gdk_window\([ _\(\),;]\|$\)" "gdk_surface\1" # Avoid hitting gdk_windowing
git sed -f g "GDK_WINDOW\([ _\(]\|$\)" "GDK_SURFACE\1" # Avoid hitting GDK_WINDOWING
git sed "GDK_\([A-Z]*\)IS_WINDOW\([_ (]\|$\)" "GDK_\1IS_SURFACE\2"
git sed GDK_TYPE_WINDOW GDK_TYPE_SURFACE
git sed -f g GdkPointerWindowInfo GdkPointerSurfaceInfo
git sed -f g "BROADWAY_WINDOW" "BROADWAY_SURFACE"
git sed -f g "broadway_window" "broadway_surface"
git sed -f g "BroadwayWindow" "BroadwaySurface"
git sed -f g "WAYLAND_WINDOW" "WAYLAND_SURFACE"
git sed -f g "wayland_window" "wayland_surface"
git sed -f g "WaylandWindow" "WaylandSurface"
git sed -f g "X11_WINDOW" "X11_SURFACE"
git sed -f g "x11_window" "x11_surface"
git sed -f g "X11Window" "X11Surface"
git sed -f g "WIN32_WINDOW" "WIN32_SURFACE"
git sed -f g "win32_window" "win32_surface"
git sed -f g "Win32Window" "Win32Surface"
git sed -f g "QUARTZ_WINDOW" "QUARTZ_SURFACE"
git sed -f g "quartz_window" "quartz_surface"
git sed -f g "QuartzWindow" "QuartzSurface"
git checkout NEWS* po-properties
This way, we can query the GL context's state via
gdk_gl_context_is_drawing().
Use this function to mark GL contexts as attached and grant them access
to the front/back buffer for rendering.
All of this is still unused because GL drawing is still disabled.
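For illustration, the intended frame-scoped state looks roughly like
this; gdk_gl_context_is_drawing() is GDK-internal, and whether the
GdkDrawContext begin/end calls are used exactly like this is an
assumption:

#include <gdk/gdk.h>

static void
render_one_frame (GdkGLContext         *context,
                  const cairo_region_t *update_region)
{
  gdk_draw_context_begin_frame (GDK_DRAW_CONTEXT (context), update_region);
  g_assert (gdk_gl_context_is_drawing (context));    /* inside a frame */
  /* ... GL rendering ... */
  gdk_draw_context_end_frame (GDK_DRAW_CONTEXT (context));
  g_assert (!gdk_gl_context_is_drawing (context));   /* frame finished */
}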
No visible changes as GL rendering is disabled at the moment.
What was done:
1. Move window->invalidate_for_new_frame to glcontext->begin_frame
This moves the code to where it is used (the GLContext) and prepares it
for being called at the point where we actually begin to draw the
frame.
2. Get rid of buffer-age usage
We want to let the application render directly to the backbuffer.
Because of that, we cannot make any assumptions about the contents the
application renders outside the clip area.
In particular, GskGLRenderer renders random data there, not actual
contents.
3. Pass the actual GL context
Previously, we passed the shared context to end_frame; now we pass the
actual GL context that the application uses for rendering. This is so
that the vfuncs can prepare the actual contexts for rendering (they
don't currently).
4. Simplify the code
The previous code set up the final drawing method in begin_frame.
Instead, we now just ensure the clip area is something we can render
and decide on the actual method in end_frame.
This is both more robust (we can change the clip area in between if we
want to) and less code.
The existence of OpenGL implementations that do not provide full core
profile compatibility for reasons beyond the technical, like llvmpipe
not implementing floating point buffers, makes keeping GdkGLProfile,
and documenting the fact that we use core profiles, a bit harder.
Since we do not have any existing profile except the default, we can
remove the GdkGLProfile and its related API from GDK and GTK+, and sweep
the whole thing under the carpet, while we wait for an extension that
lets us ask for the most compatible profile possible.
https://bugzilla.gnome.org/show_bug.cgi?id=744407
Now that we have a two-stage GL context creation sequence, we can move
the profile to a pre-realize option, like the debug and forward
compatibility bits, or the GL version to use.
If the buffer age is undefined and the updated area is not the whole
window, we use bit-blits instead of swap-buffers to end the frame.
This allows us to avoid repainting the entire window unnecessarily if
buffer_age is not supported, e.g. with DRI2.
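The end-of-frame decision then looks roughly like this (illustrative;
blit_updated_region() is a hypothetical helper for copying only the
damaged rectangles):

#include <GL/glx.h>
#include <glib.h>

static void blit_updated_region (void);   /* hypothetical */

static void
present_frame (Display     *xdisplay,
               GLXDrawable  drawable,
               guint        buffer_age,
               gboolean     update_covers_whole_window)
{
  if (buffer_age == 0 && !update_covers_whole_window)
    blit_updated_region ();                 /* copy just the damaged area */
  else
    glXSwapBuffers (xdisplay, drawable);    /* flip the whole back buffer */
}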
To properly support multithreaded use, we use a global GPrivate
to track the current context. Since we then no longer need to track
the current context on the display, we move gdk_display_destroy_gl_context
to GdkGLContext::discard.
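Schematically, the per-thread tracking amounts to something like this
(a sketch, not the actual gdkglcontext.c code; the helper names are made
up):

#include <gdk/gdk.h>

/* One slot per thread; unrefs the old context when it is replaced. */
static GPrivate thread_current_context = G_PRIVATE_INIT (g_object_unref);

static void
set_thread_current_context (GdkGLContext *context)
{
  g_private_replace (&thread_current_context,
                     context ? g_object_ref (context) : NULL);
}

static GdkGLContext *
get_thread_current_context (void)
{
  return g_private_get (&thread_current_context);
}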
We want to create windows with the default visuals such that we then
have the right visual for GLX when we want to create the paint GL
context for the window.
For instance (in bug 738670), the default rgba visual we picked for the
NVidia driver had an alpha size of 0, which gave us a BadMatch when later
trying to initialize a GL context on it with an alpha FBConfig.
Instead of just picking what the X server likes for the default and
taking the first rgba visual, we now actually call into GLX to pick
an appropriate visual.
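"Call into GLX" here means roughly the following (simplified; the
attribute list and pick_rgba_visual() are illustrative, not the exact
code GDK uses):

#include <GL/glx.h>

static XVisualInfo *
pick_rgba_visual (Display *dpy, int screen)
{
  static const int attrs[] = {
    GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
    GLX_RENDER_TYPE,   GLX_RGBA_BIT,
    GLX_RED_SIZE,   8,
    GLX_GREEN_SIZE, 8,
    GLX_BLUE_SIZE,  8,
    GLX_ALPHA_SIZE, 8,        /* so a later alpha FBConfig matches */
    GLX_DOUBLEBUFFER, True,
    None
  };
  int n_configs = 0;
  GLXFBConfig *configs = glXChooseFBConfig (dpy, screen, attrs, &n_configs);
  XVisualInfo *visinfo = NULL;

  if (configs != NULL && n_configs > 0)
    visinfo = glXGetVisualFromFBConfig (dpy, configs[0]);

  if (configs != NULL)
    XFree (configs);

  return visinfo;
}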
It's not really reasonable to handle failures of make_current(); it
basically only happens if you pass invalid arguments, and that's not
something we trap for similar operations on the X drawing side.
If GL is not supported that should be handled by the context creation
failing, and anything going wrong after that is essentially a critical
(or an async X error).
We make user-facing GL contexts not attached to a surface where possible,
or attached to dummy surfaces. This means nothing can accidentally
read from or write to the toplevel back buffer.
This adds the new type GdkGLContext that wraps an OpenGL context for a
particular native window. It also adds support for the GDK paint
machinery to use OpenGL to draw everything. As soon as anyone creates
a GL context for a native window, we create a "paint context" for that
GdkWindow and switch to using GL for painting it.
This commit contains only an implementation for X11 (using GLX).
The way painting works is that all client GL contexts draw into
offscreen buffers rather than directly to the back buffer, and the
way something gets onto the window is by using gdk_cairo_draw_from_gl()
to draw part of that buffer onto the draw cairo context.
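A hedged usage sketch of that path: the application renders into an
offscreen GL texture and then hands it to GDK from its draw handler
(draw_gl_texture() and its arguments are illustrative):

#include <gdk/gdk.h>
#include <epoxy/gl.h>

static void
draw_gl_texture (cairo_t   *cr,
                 GdkWindow *window,
                 guint      texture_id,
                 int        width,
                 int        height)
{
  gdk_cairo_draw_from_gl (cr, window,
                          texture_id,    /* the offscreen GL resource */
                          GL_TEXTURE,    /* or GL_RENDERBUFFER */
                          1,             /* buffer scale */
                          0, 0,          /* x, y within the source */
                          width, height);
}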
As a fallback (if we're doing redirected drawing or some effect like a
cairo_push_group()) we read back the GL buffer into memory and composite
using cairo. This means that GL rendering works in all cases, including
rendering to a PDF. However, this is not particularly fast.
In the *typical* case, where we're drawing directly to the window in
the regular paint loop, we hit the fast path. The fast path uses OpenGL
to draw the buffer to the window back buffer, either by blitting or
texturing. Then we track the region that was drawn, and when the draw
ends we paint the normal cairo surface to the window (using
texture-from-pixmap in the X11 case, or a texture from the cairo image
otherwise) in the regions where no GL was painted.
There are some complexities wrt layering of GL and cairo areas though
(see the sketch after this list):
* We track via gdk_window_mark_paint_from_clip() whenever GTK is
  painting over a region we previously rendered with OpenGL
  (flushed_region). This area (needs_blend_region) is blended
  rather than copied at the end of the frame.
* If we're drawing a GL texture with alpha, we first copy the current
  cairo_surface inside the target region to the back buffer before
  we blend over it.
These two operations allow us full stacking of transparent GL and cairo
regions.
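For concreteness, the bookkeeping behind the first point amounts to
roughly this (an illustrative sketch, not the actual gdkwindow.c code;
the subtract step is an assumption about how over-painted GL areas stop
counting as flushed):

#include <cairo.h>

static void
mark_paint_from_clip (cairo_region_t       *flushed_region,
                      cairo_region_t       *needs_blend_region,
                      const cairo_region_t *clip)
{
  cairo_region_t *hit = cairo_region_copy (clip);

  /* The part of the clip that covers previously flushed GL content
   * must be blended rather than copied at the end of the frame. */
  cairo_region_intersect (hit, flushed_region);
  cairo_region_union (needs_blend_region, hit);
  cairo_region_subtract (flushed_region, hit);

  cairo_region_destroy (hit);
}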