If we use GDK_GL_PROFILE_3_2_CORE we are asking for a core profile
according to the GLX_ARB_create_context_profile extension. For that,
we pass the GLX_CONTEXT_CORE_PROFILE_BIT_ARB value for the
GLX_CONTEXT_PROFILE_MASK_ARB attribute.
The specification for the extension says that:
If the requested OpenGL version is less than 3.2,
GLX_CONTEXT_PROFILE_MASK_ARB is ignored and the functionality
of the context is determined solely by the requested version.
Since we're asking for a core profile, we assume a GL version greater
than or equal to 3.2; thus, we don't need to specify the
GLX_CONTEXT_MAJOR_VERSION_ARB or the GLX_CONTEXT_MINOR_VERSION_ARB
attributes, and instead just rely on whatever version GLX gives us.
This seems to work around a strange issue in Mesa; if we ask for a core
profile and any version > 3.0, we get broken rendering on any shared
context we create.
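Roughly, the context creation then looks like this (a minimal
sketch, assuming glx_display, fbconfig and share_ctx are set up
elsewhere; not the exact GDK code):

    #include <GL/glx.h>

    PFNGLXCREATECONTEXTATTRIBSARBPROC create_context =
      (PFNGLXCREATECONTEXTATTRIBSARBPROC)
      glXGetProcAddressARB ((const GLubyte *) "glXCreateContextAttribsARB");

    int attribs[] = {
      GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
      /* no GLX_CONTEXT_MAJOR/MINOR_VERSION_ARB: take what GLX gives us */
      None
    };
    GLXContext ctx = create_context (glx_display, fbconfig, share_ctx,
                                     True /* direct */, attribs);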
We've observed mutter hanging when it initializes GTK+; the hang is
caused by initializing GL, which in turn makes xwayland call back
into mutter. With this change, mutter should just disable GL support
in GDK, and things will work.
The ICCCM says:
If the specified property is None, the requestor is an obsolete client.
Owners are encouraged to support these clients by using the specified
target atom as the property name to be used for the reply.
Let's do that, instead of crashing.
https://bugzilla.gnome.org/show_bug.cgi?id=740613
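Concretely, the fallback in the selection-request handler amounts to
something like this (illustrative sketch; xevent is the incoming
XSelectionRequestEvent):

    Atom property = xevent->property;
    if (property == None)        /* obsolete client */
      property = xevent->target; /* use the target atom as the reply property */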
This is required for the X backend GL integration. If the window
has a height that is not a multiple of the window scale, we can't
properly do the y coordinate flipping that GL needs.
Other backends can ignore this and use the default implementation.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
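To see why: GL's origin is at the bottom-left, so flipping a
rectangle given in scaled window coordinates needs the real pixel
height of the window (an illustration with made-up variable names):

    /* y of the flipped rectangle, in pixels, measured from the bottom */
    flipped_y = pixel_height - (rect_y + rect_height) * window_scale;
    /* this only maps cleanly back to scaled coordinates when
       pixel_height is a multiple of window_scale */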
Rather than just rounding down the position *and* the size
separately, we correctly calculate a rectangle in scaled window
coords that fully covers the real window size. This really only
makes a difference when the window size/position isn't a multiple
of the window scale.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
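A sketch of the covering-rectangle computation (illustrative names,
assuming non-negative coordinates):

    /* round the origin down and the far edge up, in scaled coordinates */
    scaled_x = pixel_x / scale;
    scaled_y = pixel_y / scale;
    scaled_width  = (pixel_x + pixel_width  + scale - 1) / scale - scaled_x;
    scaled_height = (pixel_y + pixel_height + scale - 1) / scale - scaled_y;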
Keep track of the exact size of X windows in underlying pixels; we
generally use the scaled size instead, but to properly handle the GL
viewport for windows that aren't a multiple of window_scale,
we need to know the real size.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
Although we specify a resize increment to try and get a size that is
a multiple of the window scale, maximization typically wins
over the resize increment, so the window might be odd sized.
Round *up* in this case, rather than down, since it's better to
truncate a line or two at the bottom and right of the window rather
than have a line or two that we don't know what to do with.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
If buffer age is undefined and the updated area is not the whole
window, then we use bit-blits instead of swap-buffers to end the
frame.
This allows us to avoid repainting the entire window unnecessarily
when buffer_age is not supported, e.g. with DRI2.
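The end-of-frame decision then looks roughly like this (hedged
sketch; buffer_age is 0 when GLX_EXT_buffer_age is unavailable, and
the helper names are illustrative, not real GDK functions):

    if (buffer_age == 0 && !update_area_covers_window (window, update_area))
      blit_update_area_to_front (window, update_area);  /* bit-blit path */
    else
      glXSwapBuffers (xdisplay, glx_drawable);           /* full swap */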
Commit afd9709aff made us keep impl window
cairo surfaces around across changes of window scale. But the
window scale setter forgot to update the size and scale of the
surface. The effect of this was that toggling the window scale
from 1 to 2 in the inspector was not causing the window to draw
at twice the size, although the X window was made twice as big,
and input was scaled too. Fix this by updating the surface when
the window scale changes.
We need to use this in the code path where we make the context
non-current during destroy, because at that point the window
could be destroyed and gdk_window_get_display() would return
NULL.
This moves the frame sync code into the is_attached check, which
means we never have to run it when making non-window-paint contexts
current.
This is a minor speed improvement, but the main advantage is that
it makes making a non-paint context current thread-safe.
This is not really needed. The GL context is completely tied to the
window it is created from, since it shares with the paint context of
that window, and that context always has the visual of the window
(which we can already get).
Also, all user-visible contexts are essentially offscreen contexts,
so a visual doesn't make sense for them. They only use FBOs, which
have whatever format the user sets up.
To properly support multithreaded use, we use a global GPrivate
to track the current context. Since we then no longer need to track
the current context on the display, we move
gdk_display_destroy_gl_context to GdkGLContext::discard.
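A minimal sketch of the per-thread tracking with GPrivate
(illustrative, not the exact GDK code):

    static GPrivate thread_current_context = G_PRIVATE_INIT (NULL);

    static void
    set_current_context (GdkGLContext *context)
    {
      /* each thread tracks its own current context; no per-display state */
      g_private_set (&thread_current_context, context);
    }

    static GdkGLContext *
    get_current_context (void)
    {
      return g_private_get (&thread_current_context);
    }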
We used to have a weak ref to the cairo surface, and it was kept
alive by the references in the normal windows, but that reference
was removed by d48adf9cee, causing
us to constantly create and destroy the surface.
https://bugzilla.gnome.org/show_bug.cgi?id=738648
We want to create windows with the default visuals such that we then
have the right visual for GLX when we want to create the paint GL
context for the window.
For instance (in bug 738670), the default rgba visual we picked for
the NVidia driver had an alpha size of 0, which gave us a BadMatch
when later trying to initialize a GL context on it with an alpha
FBConfig.
Instead of taking whatever the X server likes as the default and
just picking the first rgba visual, we now actually call into GLX
to pick an appropriate visual.
The visuals are typically sorted in some sort of "most useful first"
order, and picking the last one is likely to give us the weirdest
matching GLX visual.
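Picking the visual via GLX looks roughly like this (hedged sketch;
the attribute list is illustrative):

    static const int attrs[] = {
      GLX_RENDER_TYPE, GLX_RGBA_BIT,
      GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
      GLX_DOUBLEBUFFER, True,
      GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
      GLX_ALPHA_SIZE, 8,  /* avoid the BadMatch from an alpha-less default */
      None
    };
    int n_configs = 0;
    GLXFBConfig *configs = glXChooseFBConfig (dpy, screen, attrs, &n_configs);
    if (configs != NULL)
      {
        XVisualInfo *vinfo = glXGetVisualFromFBConfig (dpy, configs[0]);
        /* use vinfo->visualid as the default rgba visual */
        XFree (vinfo);
        XFree (configs);
      }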
Commits 314b6abbe8 and eb9223c008 were ignoring
the fact that the code where found is set to 1 was modifying col;
that was an OK thing to do when that part of the code was still
breaking out of the loop, but it no longer does that (since 2003!).
Fix things up by storing the final col value in a separate variable
and using that after the loop.
https://bugzilla.gnome.org/show_bug.cgi?id=738886