That way, we can give a useful error message when things break down for
users.
These error messages could still be improved in places (like looking at
the actual EGL error codes), but that seemed overkill.
Query the EGL_NATIVE_VISUAL_ID from the EGLConfig and select a config
with the matching Visual.
This is currently broken on Mesa because it does not expose any RGBA
X Visuals in any EGL config, so we always end up with opaque Windows.
https://gitlab.freedesktop.org/mesa/mesa/-/issues/149
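A minimal sketch of that selection (find_config_for_visual is an
illustrative name, not the actual GDK function):

    #include <epoxy/egl.h>
    #include <X11/Xlib.h>

    /* Sketch: pick the EGLConfig whose native visual matches the X Visual
     * we want to use for our windows.  Error handling is simplified. */
    static EGLConfig
    find_config_for_visual (EGLDisplay  egl_display,
                            Visual     *visual)
    {
      static const EGLint attrs[] = {
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE
      };
      EGLConfig configs[256];
      EGLint i, n_configs;
      VisualID visual_id = XVisualIDFromVisual (visual);

      if (!eglChooseConfig (egl_display, attrs, configs, 256, &n_configs))
        return NULL;

      for (i = 0; i < n_configs; i++)
        {
          EGLint id;

          if (eglGetConfigAttrib (egl_display, configs[i],
                                  EGL_NATIVE_VISUAL_ID, &id) &&
              (VisualID) id == visual_id)
            return configs[i];
        }

      /* With the Mesa bug above, this is where RGBA Visuals fall through */
      return NULL;
    }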
This reverts commit c35a6725b9.
This approach doesn't work: if NVIDIA's driver doesn't support EGL, the
EGL implementation in use won't be provided by NVIDIA either, so
checking the vendor can never catch the problematic case.
Instead, use the display's "leader surface" when no surface is required,
because we have it lying around.
Really, we want to use EGL_NO_SURFACE, but if that's not supported...
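Roughly, that fallback could look like this (the helper and the
have_surfaceless flag are illustrative names, not the real GDK code):

    #include <epoxy/egl.h>
    #include <glib.h>

    /* Sketch: make a context current when nothing needs to be drawn.
     * leader_surface is assumed to be an EGLSurface created for the
     * display's leader window. */
    static gboolean
    make_current_without_surface (EGLDisplay egl_display,
                                  EGLContext context,
                                  EGLSurface leader_surface,
                                  gboolean   have_surfaceless)
    {
      if (have_surfaceless) /* EGL_KHR_surfaceless_context is available */
        return eglMakeCurrent (egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                               context);

      /* No surfaceless support, so use the leader surface we have
       * lying around anyway */
      return eglMakeCurrent (egl_display, leader_surface, leader_surface,
                             context);
    }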
Instead of going via GdkVisual (doing a preselection and letting the GL
initialization improve it), let the GL initialization pick an X Visual
directly, using X Visual code.
The code should select the same visuals as before as it tries to apply
the same logic, but it's a rewrite, so I expect I messed something up.
1. We're using EGL most of the time anyway, so if we wanted to cache
things, we'd need to port it there.
2. Our GL handling is massively configurable, so determining when to use
the cache and when not is a challenge.
3. It makes startup nondeterministic, depending on whether a GTK4 app
has previously been started on this display, and nobody thinks about
that when debugging.
4. The only benefit of the caching is delaying GL initialization - which
made sense in GTK3 where almost no app used GL but doesn't make sense
in GTK4 where almost every app uses GL.
So unless I find a big benefit to reintroducing it, this cache will be
gone for good.
Avoids having to use private data, though the benefit is somewhat
limited as we still have to put the destructor in the EGL code and can't
just put it in gdk_surface_x11_finalize().
We only have one config, because we use the same Visual everywhere.
Store this config in the GdkDisplayX11 struct for easy access.
Also do this at initialization time, because if creating the config
fails, we want to switch to GLX instead of failing to do GL at all.
This also simplifies a lot of code, as we can share the Visual,
Colormap, etc. across surfaces.
There's no need to use g_object_set_data() for it.
We can also stop caching it elsewhere because we know the display has
it.
And finally, we can remove the display->have_egl boolean and use
display->egl_display != NULL instead. We initialize the display at
startup, so that variable is the perfect indicator.
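Schematically, with simplified stand-ins for the real GdkX11Display
internals (the struct, field and function names here are made up):

    #include <epoxy/egl.h>
    #include <glib.h>
    #include <X11/Xlib.h>

    typedef struct {
      EGLDisplay egl_display; /* NULL means "no EGL, use GLX"; this
                               * replaces the old have_egl boolean */
      EGLConfig  egl_config;  /* the one config shared by all surfaces */
    } X11DisplayState;

    /* Sketch: initialize EGL when the display is opened.  The real code
     * matches the config against the chosen X Visual; this version just
     * takes the first usable one. */
    static gboolean
    x11_display_init_egl (X11DisplayState *self,
                          Display         *xdisplay)
    {
      static const EGLint attrs[] = {
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_NONE
      };
      EGLint n_configs;

      self->egl_display = eglGetDisplay ((EGLNativeDisplayType) xdisplay);

      if (self->egl_display == EGL_NO_DISPLAY ||
          !eglInitialize (self->egl_display, NULL, NULL) ||
          !eglChooseConfig (self->egl_display, attrs,
                            &self->egl_config, 1, &n_configs) ||
          n_configs < 1)
        {
          /* Creating the config failed: clear egl_display so the
           * caller switches to GLX instead of failing GL entirely */
          self->egl_display = NULL;
          return FALSE;
        }

      return TRUE;
    }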
We need to initialize GL to select the Visual we are going to use for
all our Windows.
As the Visual needs to be known before we know if we are even gonna use
GL later, we can't avoid initializing it.
Note that this previously happened, too. It was just hidden behind the
GdkScreen initialization.
We don't want to bind ourselves to GTK3 - both because we don't want to
accidentally cause bugs in a different codebase and because we want to
deviate from it.
While doing so, also store visuals as visuals and not as integers.
And only store one Visual because GTK4 only uses one visual.
And then remove the leftover code for dealing with the GTK3
compatibility Visual.
PS: I'm kinda proud of my STRINGIFY_WITHOUT_BRACKETS hack.
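The commit doesn't show that macro here, but a plausible reconstruction
of a macro with that name (stringifying a parenthesized list while
dropping the outer parentheses) could look like this; purely
illustrative, not necessarily the actual hack:

    #include <stdio.h>

    /* Hypothetical reconstruction: UNPACK (a, b) expands to a, b, and
     * the extra level of indirection forces that expansion to happen
     * before the # operator stringifies the result. */
    #define UNPACK(...) __VA_ARGS__
    #define STRINGIFY(...) #__VA_ARGS__
    #define STRINGIFY_EXPANDED(...) STRINGIFY (__VA_ARGS__)
    #define STRINGIFY_WITHOUT_BRACKETS(x) STRINGIFY_EXPANDED (UNPACK x)

    int
    main (void)
    {
      puts (STRINGIFY_WITHOUT_BRACKETS ((1, 2, 3))); /* prints: 1, 2, 3 */
      return 0;
    }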
The old code was ordering visuals by depth, but considering that these
days we either use the default visual or a 32-bit RGBA visual, that
reordering no longer has any effect.
In theory, the only effect is that the GLX Visual selection might pick
a different replacement Visual when it checks for improved GL Visuals,
but even there I can't come up with a case where that matters, because,
again, the visuals are only reordered by depth and we want to keep the
depth.
In any case, make this a separate commit so bisecting can find this
problem if it ever shows up.
Instead of the display telling the screen to tell the visuals to tell
the display to initialize itself, just init the display directly.
What a concept.
That's a sneaky trick so my edit/compile/test cycle goes faster:
I usually use demos for testing so the tools don't have to be linked for
me to start testing.
If the pointer capability is added, pointer swipe and pinch gestures
will be created. However, if the pointer capability is removed, the
gesture objects won't be destroyed.
If the pointer capability is removed and added several times in a row,
for example due to plugging and unplugging a physical mouse, this can
leak the old gesture objects.
In order to prevent that, this change makes the seat destroy swipe and
pinch gestures when the pointer capability is withdrawn.
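The shape of that fix, with simplified stand-ins for the real GDK
Wayland seat code (the struct and gesture constructors are hypothetical):

    #include <stdint.h>
    #include <glib-object.h>
    #include <wayland-client.h>

    /* Simplified stand-in for the Wayland seat's private state */
    typedef struct {
      GObject *touchpad_swipe_gesture;
      GObject *touchpad_pinch_gesture;
    } Seat;

    /* Hypothetical constructors standing in for the real gesture setup */
    static GObject *create_swipe_gesture (Seat *seat);
    static GObject *create_pinch_gesture (Seat *seat);

    static void
    seat_handle_capabilities (Seat     *seat,
                              uint32_t  capabilities)
    {
      if (capabilities & WL_SEAT_CAPABILITY_POINTER)
        {
          if (seat->touchpad_swipe_gesture == NULL)
            seat->touchpad_swipe_gesture = create_swipe_gesture (seat);
          if (seat->touchpad_pinch_gesture == NULL)
            seat->touchpad_pinch_gesture = create_pinch_gesture (seat);
        }
      else
        {
          /* The new part: without this, every unplug/replug of a mouse
           * leaked one pair of gesture objects */
          g_clear_object (&seat->touchpad_swipe_gesture);
          g_clear_object (&seat->touchpad_pinch_gesture);
        }
    }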
It's only used during DND to allow using the root window's cow window
as a DND target, because apparently gnome-shell used to think it was a
great idea to allow DND onto the overview.
Somebody complain to gnome-shell devs about it not being a good idea if
they want it fixed.
Potentially using Wayland is a better idea though.
This reverts commit 85ae875dcb.
Related: https://bugzilla.gnome.org/show_bug.cgi?id=601731
It's not 2011 anymore, and we shouldn't randomly build one of 10,000
different combinations of X11 backends (I counted the possibilities) but
exactly the one we expect people to use.
Instead, ensure that sassc is mandatory for git builds (because it is;
we don't ship CSS files anymore) and is not even looked for in release
builds (because we do ship CSS files there).
We don't want people to build Vulkan support when they just want to get
GTK built.
This is particularly true for GTK as a CI subproject or for people
using jhbuild.
Worse, just having Vulkan support compiled in tends to cause crashes
in the Inspector, even if you are not using it.
GTK supports webm playback, which means a backend should always be
compiled.
The ffmpeg backend, however, is incomplete (no audio) and as such we
don't want people to end up with it accidentally.
Since we don't want to drag an entire gstreamer build into our CI
on macOS or with MSVC, explicitly disable the gstreamer media backend
there.
Set all settings to their default values, so we are less dependent on
the environment being set up just right. In particular, this fixes
animations being disabled when we happen to run in a VM.
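One way to do that, sketched here rather than quoted from the actual
testsuite code: walk all GtkSettings properties and write back each
one's default value.

    #include <gtk/gtk.h>

    /* Sketch: reset every writable GtkSettings property to its default,
     * so tests don't inherit quirks from the host environment (like
     * animations being turned off inside a VM). */
    static void
    reset_all_settings (GtkSettings *settings)
    {
      GParamSpec **pspecs;
      guint n_pspecs, i;

      pspecs = g_object_class_list_properties (G_OBJECT_GET_CLASS (settings),
                                               &n_pspecs);

      for (i = 0; i < n_pspecs; i++)
        {
          GValue value = G_VALUE_INIT;

          if (!(pspecs[i]->flags & G_PARAM_WRITABLE))
            continue;

          g_value_init (&value, pspecs[i]->value_type);
          g_param_value_set_default (pspecs[i], &value);
          g_object_set_property (G_OBJECT (settings), pspecs[i]->name, &value);
          g_value_unset (&value);
        }

      g_free (pspecs);
    }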
Make _gdk_win32_display_get_monitor_scale_factor() less complex, by:
* Dropping the leading underscore.
* Dropping an unused parameter.
* Using a GdkSurface instead of an HWND, as the HWND that we pass into
this function might have been taken from a GdkSurface, and those HWNDs
are now always created with CS_OWNDC. This means that if a GdkSurface
was passed in, we ensure that we only acquire the DC from the HWND
once and do not attempt to call ReleaseDC() on it.
* Storing the HDC that we acquire from the GdkSurface's HWND in the
surface, and using that as the HDC we need for our GdkGLContext.
* Dropping the gl_hwnd from GdkWin32Display, as that really should be
stored in the GdkSurface.
* In functions that were updated, naming GdkWin32Display variables
display_win32 and GdkSurface variables surface, to unify things.
* No longer calling ReleaseDC() on the HDC that we use for OpenGL,
since it was acquired from an HWND created with CS_OWNDC (see the
sketch below).
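The resulting DC handling, sketched with an illustrative struct (the
real code lives in GdkWin32Surface): because the window class uses
CS_OWNDC, GetDC() hands back the same private DC every time, so we can
fetch it once, cache it on the surface, and never release it.

    #include <windows.h>

    /* Simplified stand-in for GdkWin32Surface */
    typedef struct {
      HWND hwnd;
      HDC  hdc; /* private DC, valid for the window's whole lifetime */
    } Surface;

    static HDC
    surface_get_dc (Surface *surface)
    {
      /* The HWND comes from a window class registered with CS_OWNDC,
       * so GetDC() always returns the same private DC and calling
       * ReleaseDC() on it is unnecessary. */
      if (surface->hdc == NULL)
        surface->hdc = GetDC (surface->hwnd);

      return surface->hdc;
    }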