Instead of going via GdkVisual, doing a preselection and letting the GL
initialization improve it, let the GL initialization pick an X Visual
directly, using X Visual code.
The code should select the same visuals as before, since it tries to apply
the same logic, but it's a rewrite, so I expect I messed something up.
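
As a rough, hedged sketch, this is roughly what picking an X Visual
straight from GLX can look like. The attribute list, the
take-the-first-config policy, and the pick_gl_visual() helper name are
illustrative assumptions, not the actual GDK selection logic:

    #include <stddef.h>
    #include <GL/glx.h>
    #include <X11/Xlib.h>

    /* Illustrative only: ask GLX for a framebuffer config and derive the
     * X Visual from it, instead of preselecting a GdkVisual first. */
    static XVisualInfo *
    pick_gl_visual (Display *dpy, int screen)
    {
      static const int attrs[] = {
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_DOUBLEBUFFER,  True,
        GLX_RED_SIZE,      8,
        GLX_GREEN_SIZE,    8,
        GLX_BLUE_SIZE,     8,
        GLX_ALPHA_SIZE,    8,
        None
      };
      int n_configs = 0;
      GLXFBConfig *configs = glXChooseFBConfig (dpy, screen, attrs, &n_configs);
      XVisualInfo *visinfo = NULL;

      if (configs == NULL)
        return NULL;
      if (n_configs > 0)
        visinfo = glXGetVisualFromFBConfig (dpy, configs[0]);
      XFree (configs);

      return visinfo;  /* caller frees with XFree() */
    }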
There are several reasons why the visual cache is not coming back:
1. We're using EGL most of the time anyway, so if we wanted to cache
things, we'd need to port it there.
2. Our GL handling is massively configurable, so determining when to use
the cache and when not is a challenge.
3. It makes startup nondeterministic and dependent on whether a GTK4 app
has previously been started on this display, and nobody thinks about
that when debugging.
4. The only benefit of the caching is delaying GL initialization - which
made sense in GTK3 where almost no app used GL but doesn't make sense
in GTK4 where almost every app uses GL.
So unless I find a big benefit to reintroducing it, this cache will be
gone for good.
We don't want to bind ourselves to GTK3 - both because we don't want to
accidentally cause bugs in a different codebase and because we want to
deviate from it.
While doing so, also store visuals as visuals and not as integers. And
only store one Visual, because GTK4 only uses one visual. And then
remove the leftover code for dealing with the compatibility Visual for
GTK3.
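
As a hedged sketch of that storage change (struct and field names here
are made up for illustration, not GDK's actual ones):

    #include <X11/Xlib.h>

    /* Keep the chosen visual as a Visual pointer plus its depth, rather
     * than a bare integer visual ID, and keep only a single one since
     * GTK4 only ever uses one visual. */
    typedef struct
    {
      Display *xdisplay;

      /* before: VisualID window_visual_id; VisualID rgba_visual_id; ... */
      Visual *window_visual;
      int     window_depth;
    } ExampleX11Screen;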
PS: I'm kinda proud of my STRINGIFY_WITHOUT_BRACKETS hack.
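
For the curious, one way a macro with that name could be written is to
stringify a parenthesized argument list while dropping its outer
brackets. This is a guess at the general technique the name suggests,
not necessarily the macro from the actual change:

    /* Hypothetical reconstruction: "args" must be passed wrapped in
     * parentheses, which get stripped when the inner macro stringifies
     * the list, e.g. STRINGIFY_WITHOUT_BRACKETS ((8, 8, 8)) -> "8, 8, 8". */
    #define STRINGIFY_ARGS(...) #__VA_ARGS__
    #define STRINGIFY_WITHOUT_BRACKETS(args) STRINGIFY_ARGS args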
Check that we are indeed running inside an Xorg server before enabling
the workaround.
XWayland or other nested X servers deadlock when that workaround is
applied.
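
One possible shape of such a check, given only as a sketch: recent
XWayland servers advertise an "XWAYLAND" extension, so its presence can
be used to rule the workaround out. Whether this is the exact heuristic
used in GDK is an assumption, as is the helper name:

    #include <stdbool.h>
    #include <X11/Xlib.h>

    /* Illustrative heuristic: treat the display as a plain Xorg server
     * only if the XWAYLAND extension (registered by recent XWayland
     * servers) is absent.  Other nested servers would need extra checks. */
    static bool
    display_is_plain_xorg (Display *dpy)
    {
      int opcode, event, error;

      if (XQueryExtension (dpy, "XWAYLAND", &opcode, &event, &error))
        return false;

      return true;
    }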
If we want to add an EGL implementation for the X11 backend, we are
going to need to move the GLX bits into their own class. The first step
is to declare GdkX11GLContext as an abstract type, and then subclass it
into a GdkX11GLContextGLX type, which includes the whole GLX
implementation.
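
Schematically, that split follows the usual GObject pattern of an
abstract base class plus a concrete subclass. The sketch below assumes
it is compiled inside GDK, where the private headers make GdkGLContext
and GdkGLContextClass fully visible; the struct contents are
placeholders, not the real fields:

    /* Assumes GDK-internal headers (e.g. gdkglcontextprivate.h) are
     * available so GdkGLContext / GdkGLContextClass are complete types. */

    #define GDK_TYPE_X11_GL_CONTEXT (gdk_x11_gl_context_get_type ())
    G_DECLARE_DERIVABLE_TYPE (GdkX11GLContext, gdk_x11_gl_context,
                              GDK, X11_GL_CONTEXT, GdkGLContext)

    struct _GdkX11GLContextClass
    {
      GdkGLContextClass parent_class;
    };

    /* The base type is abstract, so it can never be instantiated directly. */
    G_DEFINE_ABSTRACT_TYPE (GdkX11GLContext, gdk_x11_gl_context,
                            GDK_TYPE_GL_CONTEXT)

    static void
    gdk_x11_gl_context_class_init (GdkX11GLContextClass *klass)
    {
    }

    static void
    gdk_x11_gl_context_init (GdkX11GLContext *self)
    {
    }

    /* The whole GLX implementation moves into a concrete subclass. */
    G_DECLARE_FINAL_TYPE (GdkX11GLContextGLX, gdk_x11_gl_context_glx,
                          GDK, X11_GL_CONTEXT_GLX, GdkX11GLContext)

    struct _GdkX11GLContextGLX
    {
      GdkX11GLContext parent_instance;
      /* GLXContext glx_context;  ...GLX-only state would live here */
    };

    G_DEFINE_TYPE (GdkX11GLContextGLX, gdk_x11_gl_context_glx,
                   GDK_TYPE_X11_GL_CONTEXT)

    static void
    gdk_x11_gl_context_glx_class_init (GdkX11GLContextGLXClass *klass)
    {
    }

    static void
    gdk_x11_gl_context_glx_init (GdkX11GLContextGLX *self)
    {
    }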