We do this because:
a) The parent class (GdkGLContext) already stores the paint regions of
previous frames; no need to do the same.
b) The painted region passed to end_frame () includes the backbuffer's
damage region, so it's not really what we want.
This also fixes a leak of cairo_region_t that I introduced by mistake
in !7418
The backbuffer's damage region is the region of the backbuffer
that doesn't contain up-to-date contents. This is determined
by the backbuffer's age and previous frame's paint regions.
This enables incremental rendering.
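A minimal sketch of that computation, with illustrative names, assuming
the previous frames' paint regions are kept in a list ordered
most-recent-first:

  static cairo_region_t *
  compute_backbuffer_damage (GdkSurface *surface,
                             int         buffer_age,
                             GList      *old_paint_regions)
  {
    cairo_region_t *damage;
    int i;

    /* Age 0 means "undefined contents": everything is damaged.
     * Same if we don't remember enough frames back. */
    if (buffer_age == 0 ||
        (guint) (buffer_age - 1) > g_list_length (old_paint_regions))
      {
        cairo_rectangle_int_t all = {
          0, 0,
          gdk_surface_get_width (surface),
          gdk_surface_get_height (surface)
        };

        return cairo_region_create_rectangle (&all);
      }

    /* With age N, the backbuffer shows what we had N frames ago, so
     * everything painted in the last N - 1 frames is out of date */
    damage = cairo_region_create ();
    for (i = 0; i < buffer_age - 1; i++, old_paint_regions = old_paint_regions->next)
      cairo_region_union (damage, old_paint_regions->data);

    return damage;
  }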
We want to get PFD_SWAP flags as that's required to enable incremental
rendering. However, as documented on MSDN, ChoosePixelFormat ignores
PFD_SWAP flags.
We may get PFD_SWAP flags or not depending on the way the OpenGL driver
orders its pixel formats. While PFD_SWAP flags are very important for
GUI toolkits, they are best avoided by games, as most games render the
scene in its entirety on each frame. Drivers optimized for games tend
to order pixel formats with no PFD_SWAP flags first.
So we implement our own method to select the best pixel format. We check
for usable pixel formats and assign penalties for each one, until we find
a format with 0 penalty or the sequence ends. Then the best pixel format
is selected.
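A sketch of that loop, with illustrative penalty weights (the real
criteria are more involved):

  #include <limits.h>
  #include <windows.h>

  static int
  choose_pixel_format_by_penalty (HDC hdc)
  {
    PIXELFORMATDESCRIPTOR pfd;
    int i, count;
    int best = 0;
    int best_penalty = INT_MAX;

    count = DescribePixelFormat (hdc, 1, sizeof (pfd), NULL);

    for (i = 1; i <= count; i++)
      {
        int penalty = 0;

        DescribePixelFormat (hdc, i, sizeof (pfd), &pfd);

        /* Skip formats that are not usable at all */
        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL) ||
            !(pfd.dwFlags & PFD_DRAW_TO_WINDOW) ||
            !(pfd.dwFlags & PFD_DOUBLEBUFFER))
          continue;

        /* No PFD_SWAP flag means no incremental rendering */
        if (!(pfd.dwFlags & (PFD_SWAP_EXCHANGE | PFD_SWAP_COPY)))
          penalty += 4;
        if (pfd.cDepthBits != 0)
          penalty += 1;
        if (pfd.cStencilBits != 0)
          penalty += 1;

        if (penalty < best_penalty)
          {
            best_penalty = penalty;
            best = i;

            if (penalty == 0)
              break;
          }
      }

    return best;
  }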
Getting a defined swap method is necessary to enable incremental
rendering. We cannot just add a swap method attribute since
exchange is generally preferred, but is not always available.
Here we try to infer what the driver prefers, then ask for
exchange or copy, in sequence.
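A sketch of the fallback sequence, leaving out the preference inference
and most of the attribute list (hdc is assumed to be in scope):

  static const int swap_methods[] = {
    WGL_SWAP_EXCHANGE_ARB,  /* preferred when the driver offers it */
    WGL_SWAP_COPY_ARB,
  };
  int format = 0;
  UINT n_formats = 0;
  gsize i;

  for (i = 0; i < G_N_ELEMENTS (swap_methods); i++)
    {
      int attribs[] = {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
        WGL_SWAP_METHOD_ARB,    swap_methods[i],
        0
      };

      if (wglChoosePixelFormatARB (hdc, attribs, NULL, 1, &format, &n_formats) &&
          n_formats > 0)
        break;
    }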
Make begin_frame() set a rendering colorstate and depth, and provide it
to the renderers via gdk_draw_context_get_depth() and
gdk_draw_context_get_color_state().
This allows the draw contexts to define their own values, so that e.g. the
Cairo and GL renderer can choose different settings for rendering (in
particular, GL can choose GL_SRGB and do the srgb conversion; while
Cairo relies on the renderer).
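A sketch of what the GL renderer side can then do (assuming the
GDK_MEMORY_U8_SRGB depth value; the getter names are from this commit):

  static int
  pick_gl_internal_format (GdkDrawContext *draw_context)
  {
    GdkMemoryDepth depth = gdk_draw_context_get_depth (draw_context);

    /* GL can render to an sRGB framebuffer and let the hardware do
     * the srgb conversion; Cairo instead relies on the renderer */
    if (depth == GDK_MEMORY_U8_SRGB)
      return GL_SRGB8_ALPHA8;

    return GL_RGBA8;
  }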
For completeness' sake, also specify in the PIXELFORMATDESCRIPTOR to use no
depth, stencil and accum bits when initializing WGL in the cases where we
can't (yet) use wglChoosePixelFormatARB(): we must always first have a base
legacy WGL context created via ChoosePixelFormat() before we can use
wglChoosePixelFormatARB(), and wglChoosePixelFormatARB() may not be
available to us at all.
Some drivers, however, enforce enabling depth buffers, so if we can't
acquire a pixel format that disables depth buffers, retry acquiring one
with depth buffers enabled. That is sadly not optimal, but we must make do.
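A sketch of that descriptor and the retry, trimmed to the relevant
fields (the real retry condition is more involved):

  PIXELFORMATDESCRIPTOR pfd = {
    .nSize = sizeof (PIXELFORMATDESCRIPTOR),
    .nVersion = 1,
    .dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    .iPixelType = PFD_TYPE_RGBA,
    .cColorBits = 32,
    .cDepthBits = 0,    /* no depth buffer ... */
    .cStencilBits = 0,  /* ... no stencil buffer ... */
    .cAccumBits = 0,    /* ... and no accumulation buffer */
  };
  int format = ChoosePixelFormat (hdc, &pfd);

  if (format == 0)
    {
      /* The driver insists on a depth buffer: retry with one enabled */
      pfd.cDepthBits = 24;
      format = ChoosePixelFormat (hdc, &pfd);
    }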
Attempts to complete the fix for issue #6401.
Unspecified attributes are not interpreted as "leave this feature out",
but rather as "pick a default value". For depth, stencil and accum bits
the defaults may be different from 0. For example, with AMD drivers we get:
* WGL_DEPTH_BITS_ARB: 32
* WGL_STENCIL_BITS_ARB: 8
* WGL_ACCUM_BITS_ARB: 0
Set the attributes to 0 as a hint that depth, stencil and accum buffers
should not be created.
The driver may still create them (matching criteria is "minimum" [1]),
but that's outside of our control (and unlikely to happen).
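The relevant slice of the attribute list then looks like this (other
requirements trimmed):

  int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DEPTH_BITS_ARB,     0,  /* hint: no depth buffer */
    WGL_STENCIL_BITS_ARB,   0,  /* hint: no stencil buffer */
    WGL_ACCUM_BITS_ARB,     0,  /* hint: no accumulation buffer */
    0
  };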
References:
[1] - WGL_ARB_pixel_format specification
https://registry.khronos.org/OpenGL/extensions/ARB/WGL_ARB_pixel_format.txt
See #6401
It started out as busywork, but it does many separate things. If I could
start over, I'd take them apart into multiple commits:
1. Remove G_ENABLE_DEBUG around GDK_DEBUG_*() calls
This is not needed at all; the calls themselves take care of it.
2. Remove G_ENABLE_DEBUG around profiling code
This now enables profiling support in release builds.
3. Stop poking _gdk_debug_flags and use GDK_DEBUG_CHECK()
This was old code that was never updated.
4. Make !G_ENABLE_DEBUG turn off GDK_DEBUG_CHECK()
The code used to
    #define GDK_DEBUG_CHECK(...) false
    #define GDK_DEBUG(...)
which would compile away all the code inside those macros. This
means a lot of variable definitions and debug utility functions
would suddenly no longer be used and cause compiler errors.
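A hypothetical illustration of that breakage:

  /* old release-build definition */
  #define GDK_DEBUG(type, ...) /* expands to nothing */

  static void
  frame_done (double frame_time)
  {
    double elapsed = frame_time / 1000.0;  /* only referenced below */

    GDK_DEBUG (FRAMES, "frame took %f ms", elapsed);
    /* With GDK_DEBUG compiled away, `elapsed` is never used and
     * -Werror=unused-variable turns that into a build failure */
  }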
Use &__ImageBase for the GTK DLL and GetModuleHandle (NULL)
for the application module. Then remove DllMain as it's not
necessary anymore.
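A sketch of the pattern from [1]:

  #include <windows.h>

  /* __ImageBase is a linker-provided symbol placed at the base address
   * of the module this object file is linked into, so its address is
   * that module's HINSTANCE; no DllMain bookkeeping needed */
  extern IMAGE_DOS_HEADER __ImageBase;

  static HINSTANCE
  this_module (void)
  {
    return (HINSTANCE) &__ImageBase;
  }

  /* GetModuleHandle (NULL) instead yields the HINSTANCE of the
   * application executable */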
References:
[1] Accessing the current module's HINSTANCE from a static library:
https://devblogs.microsoft.com/oldnewthing/20041025-00/?p=37483
...when we are using wglChoosePixelFormatARB(). This ensures that we
have an HDC with a pixel format that will really support alpha bits, as
we did for the traditional ChoosePixelFormat().
Thanks to Patrick Zacharias for testing and pointing things out.
We were sending random junk to ChoosePixelFormat().
Also assert that we don't overflow the array. That might be useful to
know if we carelessly add attributes later.
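A sketch of the guard (array size and attributes are illustrative):

  int attribs[16];
  int i = 0;

  attribs[i++] = WGL_DRAW_TO_WINDOW_ARB; attribs[i++] = GL_TRUE;
  attribs[i++] = WGL_SUPPORT_OPENGL_ARB; attribs[i++] = GL_TRUE;
  attribs[i++] = WGL_DOUBLE_BUFFER_ARB;  attribs[i++] = GL_TRUE;
  attribs[i++] = 0;  /* terminator */

  /* Trips in debug builds if someone appends attributes without
   * growing the array */
  g_assert (i <= G_N_ELEMENTS (attribs));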
... for creating the actual WGL contexts, so that we can cut down on the
number of times we need to create a base, legacy WGL context just to
create the WGL contexts with attributes. We can simply make the dummy
context that we already have current and use it to create the needed
WGL contexts.
If we are querying the best supported pixel format for our HDC via
wglChoosePixelFormatARB() (i.e. we have the WGL_ARB_pixel_format
extension), it may return a pixel format that is different from the one
we used for the dummy context that we set up in order to, well, run
wglChoosePixelFormatARB(). That function sadly requires a WGL context
(HGLRC) to be current in order to use it, which means the dummy HDC
already has a pixel format set (note that each HDC is only allowed to
have its pixel format set *once*). This is notably the case on Intel
display drivers.
Since we are emulating surfaceless GL contexts by using the dummy GL
context (and thus the dummy HDC that is derived from the notification
HWND used in GdkWin32Display), we would get into trouble if the actual
HDC from the GdkWin32Surface has a different pixel format set.
So, to fix this situation, we do the following:
* Create yet another dummy HWND in order to grab the HDC to query for the
capabilities the GL drivers support, and to call wglChoosePixelFormatARB() as
appropriate (or ChoosePixelFormat()) for the final pixel format that we use.
* Ditch the dummy GL context, HDC and HWND after obtaining the pixel format.
* Then set the final pixel format that we obtained onto the HDC that is derived
from the HWND used in GdkWin32Display for notifications, which will become our
new dummy HDC.
* Create a new dummy HGLRC for use with the new dummy HDC to emulate surfaceless
GL support.
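A sketch of the last two steps, with illustrative names; the key
constraint is that SetPixelFormat() works at most once per HDC:

  #include <windows.h>

  static BOOL
  apply_final_pixel_format (HDC notification_hdc,
                            int best_format)
  {
    PIXELFORMATDESCRIPTOR pfd;

    DescribePixelFormat (notification_hdc, best_format, sizeof (pfd), &pfd);

    /* This HDC never had a pixel format set, so this cannot clash
     * with whatever the throwaway dummy HDC ended up with */
    return SetPixelFormat (notification_hdc, best_format, &pfd);
  }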
We are currently using g_clear_pointer() on the intermediate WGL contexts
(HGLRC's) that we need to create along the way, which means that we need
to ensure that the correct calling convention for wglDeleteContext() is
applied. To be absolutely safe about it, use the
gdk_win32_private_wglDeleteContext() calls, which will in turn call
wglDeleteContext() directly from opengl32.dll (using the OpenGL headers
from the Windows SDK) instead of going via libepoxy, assuring us that
the correct calling convention is applied.
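A hypothetical sketch of such a wrapper:

  #include <windows.h>

  /* Built against opengl32.lib, so wglDeleteContext() here is the
   * WINAPI (__stdcall) function from the Windows SDK headers, not
   * libepoxy's dispatcher */
  BOOL
  gdk_win32_private_wglDeleteContext (HGLRC hglrc)
  {
    return wglDeleteContext (hglrc);
  }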
Fixes issue #5808.
In particular, we want to get the GL version when the Windows box/VM
has an unsuitable GL implementation.
This is somewhat helpful in analyzing failures to bring up GL on
machines where users claim GL does work.
This way, we can realize the context and either print success
information about it or return NULL if that fails.
This makes it more likely that we fail early, which means we can then
initialize EGL.
This refactor achieves the following:
* check GL version against the proper matching context version.
In particular, for legacy contexts, we now actually check it.
* make sure the actual version is set, even for legacy contexts
* make sure set_is_legacy() is set properly
We might be dealing with GL contexts from different threads, which have
more gotchas when we are using libepoxy. In case the function pointers
for these are invalidated by wglMakeCurrent() calls outside of GTK/GDK,
such as in GstGL, we want to use functions that are directly linked to
the opengl32.dll provided by the system/ICD, by linking to opengl32.lib.
This will ensure that we indeed call the "correct" wgl* functions that
we need.
This should help fix issue #5685.
... and use this check in gdk_gl_context_make_current() and
gdk_gl_context_get_current() to make sure the context really is still
current.
The context no longer being current can happen when external GL
implementations make their own contexts current in the same threads GDK
contexts are used in.
And that can happen for example by WebKit.
Theoretically, this should also allow external EGL code to run in X11
applications when GDK chooses to use GLX, but I didn't try it.
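A minimal sketch of such a check for the WGL case (type and field
names assumed):

  static gboolean
  gdk_win32_gl_context_wgl_is_current (GdkGLContext *context)
  {
    GdkWin32GLContextWGL *self = GDK_WIN32_GL_CONTEXT_WGL (context);

    /* Ask the GL library who is really current instead of trusting
     * our cached per-thread notion of "current" */
    return self->wgl_context == wglGetCurrentContext ();
  }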
Fixes #5392
By setting and then getting the required version in a context, the code
was not respecting user requirements. Instead, simply get the version
requested by the user, clipped by the requirements (the display version).
As per Benjamin's suggestions, clean up the previous implementation of
initializing the GLES context on Windows, so that we use more items that
are already in GDK proper and integrate two functions into one.
Instead of first trying to explicitly ask for a WGL 4.1 context, ask for
the WGL context version that matches what is reported via
epoxy_gl_version(), so that we get the maximum WGL version that is
supported by the graphics drivers, and make sure any WGL contexts that
are shared with this (initial) WGL context are created likewise.
We can try to do a default-bog-standard 3.2 core WGL context creation
if the need arises, but let's leave that alone for now.
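A sketch of the attribute setup (epoxy_gl_version() returns e.g. 46
for GL 4.6; hdc and share_context are assumed to be in scope):

  int version = epoxy_gl_version ();
  int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, version / 10,
    WGL_CONTEXT_MINOR_VERSION_ARB, version % 10,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0
  };
  HGLRC hglrc = wglCreateContextAttribsARB (hdc, share_context, attribs);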
We are now able to create EGL contexts properly on Windows, but not GLES. This
tries to fix things by doing the following:
* Record the GL context type in a more proper fashion, using an enum. This
makes things a bit cleaner.
* Force GLES 3.0+ contexts, since libANGLE requires this to work properly
with the shaders; its 2.0 contexts don't work well with our shaders.
This will port the EGL code in GDK-Win32 to use the common GDK code to
initialize EGL. However, at the current state, although EGL is
correctly initialized, this code is disabled for now since
gdk_gl_context_make_current() fails as the shaders do not work for EGL
via libANGLE on Windows.
We can now clean things up in gdkglcontext-win32-egl.c as a result.
Add gdk_gl_context_is_api_allowed() for backends and make them use it.
Finally, have them return the final API as the return value (or 0 on
error).
And then use that api instead of a use_es boolean flag.
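A sketch of the resulting backend pattern (simplified; the realize
logic shown is illustrative, 0 signals an error):

  static GdkGLAPI
  example_context_realize (GdkGLContext  *context,
                           GError       **error)
  {
    /* gdk_gl_context_is_api_allowed() consolidates the policy checks */
    if (gdk_gl_context_is_api_allowed (context, GDK_GL_API_GL, NULL))
      return GDK_GL_API_GL;

    if (!gdk_gl_context_is_api_allowed (context, GDK_GL_API_GLES, error))
      return 0;  /* neither API may be used; error is set */

    return GDK_GL_API_GLES;
  }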
Fixes #4221
The term "hdr" is so overloaded, we shouldn't use them anywhere, except
from maybe describing all of this work in blog posts and other marketing
materials.
So do renames:
* hdr => high_depth
* request_hdr => prefers_high_depth
This more accurately describes what is going on.
Creative people managed to create an X11 display and a Wayland display
at once, thereby getting EGL and GLX involved in a fight to the death
over the ownership of the glFoo() symbolspace.
A way to force such a fight with available tools here is (on Wayland)
running something like:
GTK_INSPECTOR_DISPLAY=:1 GTK_DEBUG=interactive gtk4-demo
Related: xdg-desktop-portal-gnome#5