GDK defaults to asking for an OpenGL 3.2 Core Profile, but if we get a
legacy profile from the underlying windowing system, the OpenGL version
will be fixed to 3.0. If that happens, we need to set the legacy bit on
the GdkGLContext, since that bit will be used to determine the version
and type of GLSL shaders that will be used by application and toolkit
code alike.
(cherry picked from commit 31c05771e9)
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
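For illustration only, a minimal sketch of the kind of choice the legacy bit drives in toolkit or application code; the version strings are just examples:

    #include <gdk/gdk.h>

    /* Pick the GLSL version declaration depending on whether the
     * context ended up as a legacy one. */
    static const char *
    pick_glsl_version_decl (GdkGLContext *context)
    {
      if (gdk_gl_context_is_legacy (context))
        return "#version 120\n";   /* legacy / compatibility GLSL */
      else
        return "#version 150\n";   /* GLSL for the 3.2 core profile */
    }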
Now that the use_es field is an int with a possible negative value, we
cannot use its truth value directly; we need to check whether it's a
positive value instead.
(cherry picked from commit 8e85f55240)
Signed-off-by: Emmanuele Bassi <ebassi@gnome.org>
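A minimal sketch of the check this implies (the helper is illustrative):

    #include <glib.h>

    /* The field is now a tri-state: -1 = not set, 0 = desktop GL,
     * 1 = OpenGL ES.  "if (use_es)" would wrongly treat -1 as TRUE,
     * so only a positive value may select the ES path. */
    static gboolean
    context_should_use_es (int use_es)
    {
      return use_es > 0;
    }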
We've already set ->use_es correctly at context creation time; all this
can possibly do is change our mind about what kind of GL we're using.
Signed-off-by: Adam Jackson <ajax@redhat.com>
https://bugzilla.gnome.org/show_bug.cgi?id=773180
Calling gdk_gl_context_realize() should always produce a valid result,
so we need to provide a default implementation and avoid calling a NULL
function pointer.
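A hedged sketch of what such a default vfunc could look like; the exact error domain and message are assumptions:

    #include <gdk/gdk.h>

    /* Backends with GL support override this vfunc; without an
     * override it fails cleanly instead of crashing on NULL. */
    static gboolean
    gdk_gl_context_default_realize (GdkGLContext  *context,
                                    GError       **error)
    {
      g_set_error_literal (error, GDK_GL_ERROR, GDK_GL_ERROR_NOT_AVAILABLE,
                           "The current backend does not support OpenGL");
      return FALSE;
    }

    /* in class_init: klass->realize = gdk_gl_context_default_realize; */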
When uploading a Cairo image surface to a GL texture we cannot use
GL_BGRA and GL_UNSIGNED_INT_8_8_8_8_REV on OpenGL ES, as they do not
exist in the core spec.
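For illustration, the format selection looks roughly like this ('context', 'surface', 'width' and 'height' are assumed to be in scope; the pixel conversion needed on the GLES path is omitted):

    GLenum gl_format, gl_type;

    if (gdk_gl_context_get_use_es (context))
      {
        /* GLES core has neither GL_BGRA nor GL_UNSIGNED_INT_8_8_8_8_REV;
         * upload as RGBA bytes (CAIRO_FORMAT_ARGB32 data may need
         * converting or swizzling first). */
        gl_format = GL_RGBA;
        gl_type   = GL_UNSIGNED_BYTE;
      }
    else
      {
        gl_format = GL_BGRA;
        gl_type   = GL_UNSIGNED_INT_8_8_8_8_REV;
      }

    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                  gl_format, gl_type,
                  cairo_image_surface_get_data (surface));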
On some platforms we can ask the GL context machinery to create a GLES
context, instead of a GL one.
In order to ask for a GLES context at GdkGLContext realization time, we
use a bit field, like we do for forward-compatible or debug contexts.
The 'use-es' bit also changes the way we select a default version,
because OpenGL and OpenGL ES versions differ.
https://bugzilla.gnome.org/show_bug.cgi?id=743746
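A usage sketch, for illustration ('window' is assumed to be a realized GdkWindow):

    GError *error = NULL;
    GdkGLContext *context = gdk_window_create_gl_context (window, &error);

    if (context != NULL)
      {
        /* Must be set before realization; the default version is then
         * chosen among OpenGL ES versions instead of OpenGL ones. */
        gdk_gl_context_set_use_es (context, TRUE);

        if (!gdk_gl_context_realize (context, &error))
          g_warning ("Could not realize a GLES context: %s", error->message);
      }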
The g_print documentation explicitly says not to do this, since
g_print is meant to be redirected by applications. Instead use
g_message for logging that can be triggered via GTK_DEBUG.
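For illustration (the message text is a placeholder):

    /* Before: goes straight to stdout, even if the application has
     * redirected g_print() for its own output. */
    g_print ("Creating GL context\n");

    /* After: goes through the GLib logging machinery, where it can be
     * enabled and filtered like other debug messages. */
    g_message ("Creating GL context");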
We want to have the ability to fall back to legacy GL contexts when
creating them. In order to do so, we need to store the legacy bit on the
GdkGLContext, as well as being able to query it.
Setting the legacy bit from outside GDK is not possible; we cannot
create GL contexts in 3.2 core profile *and* compatibility modes at the
same time, and if we allowed users to select the legacy mode themselves,
it would break the creation of the GdkWindow's paint GL context.
What we do allow is falling back to legacy GL context if the platform
does not support 3.2 core profiles — for instance, on older GPUs or
inside virtualized environments.
We are also going to use the legacy bit internally, to choose which GL
API we can use when drawing GL content.
https://bugzilla.gnome.org/show_bug.cgi?id=756142
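A hedged sketch of how the fallback surfaces to callers ('context' and 'error' are assumed to be set up as usual):

    if (gdk_gl_context_realize (context, &error))
      {
        /* GDK may have fallen back to a legacy context when 3.2 core
         * is unavailable (older GPUs, virtualized environments); the
         * legacy bit can be queried, but never set, from outside GDK. */
        if (gdk_gl_context_is_legacy (context))
          g_message ("Using a legacy GL context");
      }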
If the user requests a version less than 3.2, the version is forced to 3.2.
The previous checking code had inconsistent behavior depending on which
minor version number was specified. This is now avoided by temporarily
converting the major/minor pair into a single integer.
https://bugzilla.gnome.org/show_bug.cgi?id=744288
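A minimal sketch of the comparison; the exact encoding is an assumption for illustration:

    /* Encoding major/minor as a single integer makes "3.0 vs 3.2" and
     * "2.4 vs 3.2" compare consistently, then we clamp to the minimum. */
    int version = req_major * 100 + req_minor;

    if (version < 302)
      {
        req_major = 3;
        req_minor = 2;
      }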
The existence of OpenGL implementations that do not provide full core
profile compatibility for reasons that are not purely technical, like
llvmpipe not implementing floating point buffers, makes it harder to
keep GdkGLProfile around and to document the fact that we use core
profiles.
Since we do not have any existing profile except the default, we can
remove the GdkGLProfile and its related API from GDK and GTK+, and sweep
the whole thing under the carpet, while we wait for an extension that
lets us ask for the most compatible profile possible.
https://bugzilla.gnome.org/show_bug.cgi?id=744407
Store the OpenGL version when we first do the extensions check; this
allows client code to check the available GL version without requiring a
call to gdk_gl_context_make_current() and epoxy_gl_version().
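Client code can then do, for example:

    int major, minor;

    /* Works on a realized context without making it current first. */
    gdk_gl_context_get_version (context, &major, &minor);
    g_message ("Using OpenGL %d.%d", major, minor);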
Now that we have a two-stage GL context creation sequence, we can move
the profile to a pre-realize option, like the debug and forward
compatibility bits, or the GL version to use.
We simply don't want to care about legacy OpenGL.
All supported platforms also have support for OpenGL ≥ 3.2; keeping
legacy support would complicate the internal code, and would force us
to use legacy GL contexts internally if the first context created by
the user is a legacy one, disabling the creation of core-3.2 contexts
after that.
We will need to fix all our code examples to use the Core 3.2 profile.
https://bugzilla.gnome.org/show_bug.cgi?id=741946
Users of the GdkGLContext API should be allowed to set properties on the
shim GdkGLContext instance prior to realization, so that the
backend-specific implementation can use the value of those properties
when creating the windowing system specific resources.
The main three options are:
• a major/minor version tuple, to request a specific GL version
• a debug bit, to request a "debug context", which provides additional
validation and run time checking
• a forward compatibility bit, to request a context that does not
have deprecated functionality
See also:
- https://www.opengl.org/registry/specs/ARB/glx_create_context.txt
https://bugzilla.gnome.org/show_bug.cgi?id=741946
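For illustration, setting these options on the shim instance before realization ('window' is assumed; error handling omitted):

    GError *error = NULL;
    GdkGLContext *context = gdk_window_create_gl_context (window, &error);

    /* All of these only take effect if set before realization. */
    gdk_gl_context_set_required_version (context, 3, 2);
    gdk_gl_context_set_debug_enabled (context, TRUE);
    gdk_gl_context_set_forward_compatible (context, TRUE);

    gdk_gl_context_realize (context, &error);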
One of the major requests by OpenGL users has been the ability to
specify settings when creating a GL context, like the version to use
or whether the debug support should be enabled.
We have a couple of requirements in terms of API:
• avoid, if at all possible, the "C arrays of integers with
attribute, value pairs", which are hard to write and hard
to bind in non-C languages.
• allow failing in a recoverable way.
• do not make the GL context creation API a mess of arguments.
Looking at prior art, it seems that a common pattern is to split the
construction phase in two:
• a first phase that creates a GL context wrapper object and
does preliminary checks on the environment.
• a second phase that creates the backend-specific GL object.
We adopted a similar pattern:
• gdk_window_create_gl_context() creates a GdkGLContext
• gdk_gl_context_realize() creates the underlying resources
Calling gdk_gl_context_make_current() also realizes the context, so
simple GL users do not need to care. Advanced users will want to
call gdk_window_create_gl_context(), set up the optional requirements,
and then call gdk_gl_context_realize(). If either of these two steps
fails, it's possible to recover by changing the requirements, or simply
creating a new GdkGLContext instance.
https://bugzilla.gnome.org/show_bug.cgi?id=741946
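An advanced-user sketch of the two phases with recoverable failure; the requested versions are arbitrary examples:

    static void
    create_and_use_context (GdkWindow *window)
    {
      GError *error = NULL;
      GdkGLContext *context = gdk_window_create_gl_context (window, &error);

      if (context == NULL)
        {
          /* Phase one failed: no GL on this windowing system. */
          g_warning ("No GL support: %s", error->message);
          g_clear_error (&error);
          return;
        }

      /* Optional requirements go between the two phases. */
      gdk_gl_context_set_required_version (context, 4, 1);

      if (!gdk_gl_context_realize (context, &error))
        {
          /* Recoverable: relax the requirements and try again, or
           * drop this instance and create a new GdkGLContext. */
          g_clear_error (&error);
          gdk_gl_context_set_required_version (context, 3, 2);
          gdk_gl_context_realize (context, &error);
        }

      /* make_current() realizes implicitly for the simple case. */
      gdk_gl_context_make_current (context);
    }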
- Specifically request the GL version when creating a context. Just specifying
the core profile bit results in the requested version defaulting to 1.0, which
causes the core profile bit to be ignored and an arbitrary compatibility
context to be returned.
- Fix GL painting by removing GL calls that have been deprecated by the 3.2 core
profile.
- Additionally remove the glInvalidateFramebuffer() call; it is not supported by
3.2 core.
https://bugzilla.gnome.org/show_bug.cgi?id=742953
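Roughly what the GLX side of the first point amounts to ('dpy' and 'fbconfig' are assumed; the constants come from GLX_ARB_create_context):

    int attribs[] = {
      GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
      GLX_CONTEXT_MINOR_VERSION_ARB, 2,
      GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
      None
    };

    /* Without the explicit 3.2 version the request defaults to 1.0,
     * the profile mask is ignored, and an arbitrary compatibility
     * context can come back. */
    GLXContext ctx = glXCreateContextAttribsARB (dpy, fbconfig, NULL,
                                                 True, attribs);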
As the alignments, strides and image formats may be different across
platforms, make the texture upload a vfunc, so that backends can
override the GL commands used by the software implementation of
gdk_gl_texture_from_surface(), if necessary.
Suggested by Alex to avoid copying non-trivial portions of code, which
would then add maintenance burden.
https://bugzilla.gnome.org/show_bug.cgi?id=740795
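A hedged sketch of the shape of such a vfunc; the exact prototype in GDK may differ:

    /* Hypothetical member of GdkGLContextClass: backends override it
     * when their strides, alignment or pixel formats differ from the
     * default upload path. */
    void (* upload_texture) (GdkGLContext    *context,
                             cairo_surface_t *image_surface,
                             guint            width,
                             guint            height,
                             guint            texture_target);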
If buffer age is undefined and the updated area is not the whole
window then we use bit-blits instead of swap-buffers to end the
frame.
This allows us to avoid repainting the entire window unnecessarily if
buffer_age is not supported, e.g. with DRI2.
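The end-of-frame decision, as a rough sketch (the helper names are hypothetical):

    /* buffer_age == 0 means "undefined", e.g. when DRI2 lacks the
     * buffer_age extension. */
    if (buffer_age == 0 && !update_covers_whole_window)
      blit_updated_region (window, update_area);   /* bit-blit only the damage */
    else
      swap_buffers (window);                       /* regular buffer swap */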
We need to use this in the code path where we make the context
non-current during destroy, because at that point the window
could be destroyed and gdk_window_get_display() would return
NULL.
This is not really needed. The GL context is completely tied to the
window it is created from, by virtue of sharing with the paint context
of that window, and that context always has the visual of the window
(which we can already get).
Also, all user-visible contexts are essentially offscreen contexts, so
a visual doesn't make sense for them. They only use FBOs, which have
whatever format the user sets up.
To properly support multithreaded use, we use a global GPrivate
to track the current context. Since we also don't need to track
the current context on the display, we move gdk_display_destroy_gl_context
to GdkGLContext::discard.
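A minimal sketch of per-thread tracking with a GPrivate (function names are illustrative):

    /* One "current GL context" slot per thread. */
    static GPrivate thread_current_context = G_PRIVATE_INIT (NULL);

    static void
    set_thread_current_context (GdkGLContext *context)
    {
      g_private_set (&thread_current_context, context);
    }

    static GdkGLContext *
    get_thread_current_context (void)
    {
      return g_private_get (&thread_current_context);
    }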