Previously, code would work fine with --disable-vulkan as long as the
Vulkan headers were installed: the code would happily use them anyway,
since they live in /usr/include.
Instead, complain if somebody calls gdk_x11_window_get_xid() on a
non-native window.
We cannot make random windows native anymore because there's no GSK
renderer associated with them, so we cannot draw them.
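A minimal sketch of what that check could look like (the warning text
and the direct field access are illustrative, not the exact
implementation):

    Window
    gdk_x11_window_get_xid (GdkWindow *window)
    {
      if (!gdk_window_has_native (window))
        {
          g_warning ("gdk_x11_window_get_xid() called on non-native window");
          return None;
        }

      return GDK_WINDOW_IMPL_X11 (window->impl)->xid;
    }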
gdk_window_create_vulkan_context() now exists and will return a Vulkan
context for the given window. It even initializes the surface. But it
doesn't do anything useful yet.
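A sketch of the intended call pattern, assuming the function follows
the usual GDK convention of returning NULL and setting a GError on
failure:

    GError *error = NULL;
    GdkVulkanContext *context;

    context = gdk_window_create_vulkan_context (window, &error);
    if (context == NULL)
      {
        g_warning ("Failed to create Vulkan context: %s", error->message);
        g_clear_error (&error);
      }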
Adds the gdk_display_ref_vulkan() and gdk_display_unref_vulkan()
functions, which set up and tear down Vulkan support for the display.
Nothing is using those functions yet.
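The intended pairing presumably looks like this (the error-reporting
signature of gdk_display_ref_vulkan() is an assumption):

    GError *error = NULL;

    if (!gdk_display_ref_vulkan (display, &error))
      {
        g_warning ("Vulkan not available: %s", error->message);
        g_clear_error (&error);
        return;
      }

    /* ... use the display's Vulkan instance ... */

    gdk_display_unref_vulkan (display);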
I read the code as if (use_gl) instead of if (!use_gl) and commented it
out in bddfd7bb41. That broke drawing on
Wayland without OpenGL completely.
Whoops.
Now it's back.
There were some parts that needed updates after the refactoring in
GDKGL, so that the code will continue to build and run.
For gdkwindow-win32.c, comment out the parts where we check for use_gl
(which was removed), since we are going to move all drawing to OpenGL,
but don't remove/disable the whole portion as that transition is not
complete at this point.
There is a new GDKGL function that checks for the damaged area of the
back buffer. But the notion of "damage" is a *NIX concept (GLX, and EGL
for Wayland/Mir); there is no such extension for Windows, so we can't
support this on Windows as-is, at least for now.
https://bugzilla.gnome.org/show_bug.cgi?id=773299
This is a way to query the damaged area of the backbuffer.
The GL renderer uses this to compute the extents of that damage region
(computed via buffer age) and uses them to minimize the area to redraw.
This changes the semantics of GL rendering to: "When calling
gdk_window_begin_frame() with a GL context, the area returned by
gdk_gl_context_get_damage() needs to be redrawn, and every other pixel
of the backbuffer is guaranteed to be correct."
After gdk_window_end_frame() on a GL-drawn window, the whole backbuffer
must be correct.
We can always call glXSwapBuffers() now because of this.
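On GLX, a backend could derive the damage region from the
GLX_EXT_buffer_age extension roughly like this (the region history
bookkeeping is illustrative; needs <GL/glx.h> and <gdk/gdk.h>):

    static cairo_region_t *
    get_damage_via_buffer_age (Display         *dpy,
                               GLXDrawable      drawable,
                               GdkWindow       *window,
                               cairo_region_t **history, /* most recent first */
                               guint            n_history)
    {
      unsigned int age = 0;

      glXQueryDrawable (dpy, drawable, GLX_BACK_BUFFER_AGE_EXT, &age);

      if (age == 0 || age > n_history)
        {
          /* Buffer contents are undefined or too old: redraw everything */
          cairo_rectangle_int_t all = {
            0, 0,
            gdk_window_get_width (window),
            gdk_window_get_height (window)
          };
          return cairo_region_create_rectangle (&all);
        }

      /* The backbuffer is `age` frames old, so the union of the regions
       * painted since then is exactly what is out of date */
      cairo_region_t *damage = cairo_region_create ();
      for (guint i = 0; i < age - 1; i++)
        cairo_region_union (damage, history[i]);

      return damage;
    }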
... instead of a gl context.
This requires some refactoring in the way we mark the shared context as
drawing: We now call begin_frame/end_frame() on it and ignore the call
on the main context.
Unfortunately we need to do this check in all vfuncs, which sucks. But I
haven't found a better way.
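In each vfunc the check could look roughly like this
(context_is_the_shared_one() is a hypothetical stand-in for however the
backend identifies its shared context):

    static void
    backend_gl_context_end_frame (GdkGLContext   *context,
                                  cairo_region_t *painted)
    {
      if (!context_is_the_shared_one (context))
        return; /* ignore the call on the main context */

      /* ... mark the shared context as no longer drawing and present ... */
    }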
Reenable GL drawing, but do it without Cairo.
Now, the context passed to gdk_window_begin_draw_frame() decides how
drawing is going to happen. If it is NULL, Cairo is used like before.
If a context is passed, Cairo may not be used for drawing and
gdk_drawing_context_get_cairo_context() is going to return NULL.
Instead, the GL renderer must draw to the GL backbuffer and
end_draw_frame() is then swapping that to the front.
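In caller terms, the two paths would look like this (window, context
and region are assumed to be given; the context parameter to
gdk_window_begin_draw_frame() is the new bit described above):

    GdkDrawingContext *draw_context;
    cairo_t *cr;

    /* Passing NULL as the context selects the Cairo path */
    draw_context = gdk_window_begin_draw_frame (window, context, region);

    cr = gdk_drawing_context_get_cairo_context (draw_context);
    if (cr != NULL)
      {
        /* context was NULL: draw with Cairo as before */
      }
    else
      {
        /* a GL context was passed: draw to the GL backbuffer;
         * end_draw_frame() will swap it to the front */
      }

    gdk_window_end_draw_frame (window, draw_context);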
The GskGLRenderer has lost the texture it used to render to and has
been adapted to render directly to the backbuffer instead.
The only thing missing is for GtkGLArea to gain back a performant way
to render. But it hasn't had one since the introduction of GSK, and
this patchset doesn't change anything about that.
The new rendering avoids two indirections (the GSK renderer's texture
and the GDK double buffering surface).
It improves icon count in the fishbowl demo by 30%.
This way, we can query the GL context's state via
gdk_gl_context_is_drawing().
Use this function to mark GL contexts as attached and grant them access
to the front/backbuffer for rendering.
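A sketch of how a renderer might use it:

    /* Only a context that is currently drawing may touch the
     * window's front/backbuffer */
    if (gdk_gl_context_is_drawing (context))
      {
        /* attached: render straight to the backbuffer */
      }
    else
      {
        /* not drawing: render to an offscreen target instead */
      }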
All of this is still unused because GL drawing is still disabled.
No visible changes as GL rendering is disabled at the moment.
What was done:
1. Move window->invalidate_for_new_frame to glcontext->begin_frame
This moves the code to where it is used (the GLContext) and prepares it
for being called when we actually begin to draw the frame.
2. Get rid of buffer-age usage
We want to let the application render directly to the backbuffer.
Because of that, we cannot make any assumptions about the contents the
application renders outside the clip area.
In particular, GskGLRenderer renders random stuff there, but not actual
contents.
3. Pass the actual GL context
Previously, we passed the shared context to end_frame; now we pass the
actual GL context that the application uses for rendering. This is so
that the vfuncs can prepare the actual contexts for rendering (they
don't currently).
4. Simplify the code
The previous code set up the final drawing method in begin_frame.
Instead, we now just ensure the clip area is something we can render
and decide on the actual method in end_frame.
This is both more robust (we can change the clip area in between if we
want to) and less code; see the sketch after this list.
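A sketch of the simplified flow, with purely illustrative helper names:

    /* begin_frame: only make sure the clip is something we can render */
    static void
    begin_frame (GdkGLContext *context, cairo_region_t *clip)
    {
      /* without buffer age we can't trust pixels outside the clip,
       * so widen the clip to a region we can fully render */
      ensure_clip_is_renderable (clip);
    }

    /* end_frame: decide on the actual presentation method here */
    static void
    end_frame (GdkGLContext *context, cairo_region_t *painted)
    {
      if (painted_covers_whole_window (painted))
        swap_buffers (context);                 /* full redraw: just swap */
      else
        blit_painted_region (context, painted); /* partial: copy to front */
    }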
This is a temporary switch-off of the GL drawing code that will keep
things running. All GL-related code (like the GSK renderer or
GtkGLArea) will now fall back to software.
Wayland subsurfaces can have other native windows as parents, but those
need to be destroyed along with the rest of the window hierarchy;
otherwise an assert() is triggered.
https://bugzilla.gnome.org/show_bug.cgi?id=774915
gdk_window_get_toplevel() walks up the window tree looking for the
corresponding toplevel window, but needs to account for subsurfaces as
well on Wayland.
https://bugzilla.gnome.org/show_bug.cgi?id=775319
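The adjusted walk could look roughly like this (the transient_for link
for subsurfaces is an assumption about how the attached parent is
tracked):

    GdkWindow *
    gdk_window_get_toplevel (GdkWindow *window)
    {
      while (window->window_type == GDK_WINDOW_CHILD ||
             window->window_type == GDK_WINDOW_SUBSURFACE)
        {
          if (gdk_window_is_toplevel (window))
            break;

          if (window->window_type == GDK_WINDOW_SUBSURFACE)
            window = window->transient_for;
          else
            window = window->parent;
        }

      return window;
    }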
Now that subsurfaces can be created as children of another GdkWindow
(and not just the root window), they must be placed according to the
location of their parent, i.e. the abs_x/abs_y must be updated and
taken into account when placing and moving subsurfaces under Wayland.
https://bugzilla.gnome.org/show_bug.cgi?id=774917
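The placement math then becomes parent-relative, roughly like this
(`subsurface` stands for the window's wl_subsurface; the surrounding
plumbing is elided):

    /* position the subsurface relative to its parent, taking the
     * parent's abs_x/abs_y offset into account */
    wl_subsurface_set_position (subsurface,
                                window->x + parent->abs_x,
                                window->y + parent->abs_y);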