We simply don't want to care about legacy OpenGL.
All supported platforms also have support for OpenGL ≥ 3.2; supporting
legacy GL would complicate the internal code and would force us to use
legacy GL contexts internally if the first context created by the user is
a legacy GL context, disabling creation of core-3.2 contexts after that.
We will need to fix all our code examples to use the Core 3.2 profile.
https://bugzilla.gnome.org/show_bug.cgi?id=741946
As is done in the X11 and Wayland backends, create the
GdkWin32GLContext in two steps, where the actual WGL context is only
created in _gdk_win32_gl_context_realize().
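A rough sketch of what the realize step looks like, assuming
wglCreateContextAttribsARB (from WGL_ARB_create_context) has already been
loaded via wglGetProcAddress; the GdkWin32GLContext fields used here are
simplified placeholders, not the exact internals:

  static gboolean
  _gdk_win32_gl_context_realize (GdkGLContext  *context,
                                 GError       **error)
  {
    GdkWin32GLContext *context_win32 = GDK_WIN32_GL_CONTEXT (context);

    /* only the WGL context is created here; the GdkGLContext wrapper
     * was already created by gdk_window_create_gl_context() */
    int attribs[] = {
      WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
      WGL_CONTEXT_MINOR_VERSION_ARB, 2,
      WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
      0
    };

    context_win32->hglrc =
      wglCreateContextAttribsARB (context_win32->gl_hdc, NULL, attribs);

    if (context_win32->hglrc == NULL)
      {
        g_set_error_literal (error, GDK_GL_ERROR, GDK_GL_ERROR_NOT_AVAILABLE,
                             "Unable to create a WGL context");
        return FALSE;
      }

    return TRUE;
  }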
https://bugzilla.gnome.org/show_bug.cgi?id=741946
Users of the GdkGLContext API should be allowed to set properties on the
shim GdkGLContext instance prior to realization, so that the
backend-specific implementation can use the value of those properties
when creating the windowing system specific resources.
The three main options are:
• a major/minor version tuple, to request a specific GL version
• a debug bit, to request a "debug context", which provides additional
validation and run time checking
• a forward compatibility bit, to request a context that does not
have deprecated functionality
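These map to setter functions on the GdkGLContext instance; a short sketch,
assuming a context that has been created but not yet realized:

  /* request a 3.2 debug context without deprecated functionality */
  gdk_gl_context_set_required_version (context, 3, 2);
  gdk_gl_context_set_debug_enabled (context, TRUE);
  gdk_gl_context_set_forward_compatible (context, TRUE);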
See also:
- https://www.opengl.org/registry/specs/ARB/glx_create_context.txt
https://bugzilla.gnome.org/show_bug.cgi?id=741946
One of the major requests by OpenGL users has been the ability to
specify settings when creating a GL context, like the version to use
or whether the debug support should be enabled.
We have a couple of requirements in terms of API:
• avoid, if at all possible, the "C arrays of integers with
attribute, value pairs", which are hard to write and hard
to bind in non-C languages.
• allow failing in a recoverable way.
• do not make the GL context creation API a mess of arguments.
Looking at prior art, it seems that a common pattern is to split the
construction phase in two:
• a first phase that creates a GL context wrapper object and
does preliminary checks on the environment.
• a second phase that creates the backend-specific GL object.
We adopted a similar pattern:
• gdk_window_create_gl_context() creates a GdkGLContext
• gdk_gl_context_realize() creates the underlying resources
Calling gdk_gl_context_make_current() also realizes the context, so
simple GL users do not need to care. Advanced users will want to
call gdk_window_create_gl_context(), set up the optional requirements,
and then call gdk_gl_context_realize(). If either of these two steps
fails, it's possible to recover by changing the requirements, or simply
creating a new GdkGLContext instance.
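A sketch of the advanced path, including the recoverable-failure handling
described above (error handling abbreviated):

  GdkGLContext *context;
  GError *error = NULL;

  context = gdk_window_create_gl_context (window, &error);
  if (context == NULL)
    {
      /* preliminary environment checks failed */
      g_error_free (error);
      return;
    }

  /* optional requirements go here, before realization */
  gdk_gl_context_set_required_version (context, 3, 2);

  if (!gdk_gl_context_realize (context, &error))
    {
      /* recover by changing the requirements, or on a new context */
      g_clear_error (&error);
      g_clear_object (&context);
      return;
    }

  /* simple users can skip realize(): make_current() implies it */
  gdk_gl_context_make_current (context);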
https://bugzilla.gnome.org/show_bug.cgi?id=741946
The default ->upload_texture() also works for Windows since commit 27cf0fa,
as some of the problems described in 742953 applied to GL core contexts on
Windows as well before that commit. Clean up the GDK-Win32 code a little
bit as a result.
We should remove the mir and cairo surface before rendering the
transient_for, which will regenerate the cairo surface anyway.
Otherwise, we end up releasing both, when we only really want to get rid
of the mir surface.
Mousing over a parent menu[bar] didn't work while the menu was open.
The fix was to correct the behaviour of pointer crossing events so that
the pointer appears to be only inside one window at a time.
See: http://tronche.com/gui/x/xlib/events/window-entry-exit/normal.html
We need this because it fixes menu activation. The menu activation code
looks at the time between events to determine if mouse clicks happen too
quickly.
Commit ff256956b2 introduced a frame_clock_events_paused
flag, but only ever set it to TRUE, instead of unsetting it when
events are resumed. This was leading to assertion failures in
_gdk_display_unpause_events().
If we are disconnecting from a frame clock that has paused event
processing and hasn't issued a resume yet, make sure we resume the
events, or they will stay blocked forever.
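A minimal sketch of the disconnect path, assuming the
frame_clock_events_paused flag from ff256956b2 lives on the window (field
access simplified for illustration):

  /* when gdk_window_set_frame_clock() disconnects the old clock */
  if (window->frame_clock_events_paused)
    {
      _gdk_display_unpause_events (gdk_window_get_display (window));
      window->frame_clock_events_paused = FALSE;
    }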
https://bugzilla.gnome.org/show_bug.cgi?id=742636
This function is given a barely set-up GdkEvent, so the GdkDevice field
is still unset, causing warnings and misbehavior when the position is
queried for it.
Given that the wintab GTK+ code seems to rely rather heavily on the wintab
device managing the pointer cursor, query the pointer position from the
pointer device itself.
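In terms of the public device API, the idea is roughly the following (a
sketch; device_manager stands in for the wintab code's GdkDeviceManager):

  GdkDevice *pointer;
  gint x, y;

  /* ask the pointer device, not the half-initialized event */
  pointer = gdk_device_manager_get_client_pointer (device_manager);
  gdk_device_get_position (pointer, NULL, &x, &y);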
https://bugzilla.gnome.org/show_bug.cgi?id=743330
The window used NULL as a parent window, which defaults internally to
using the root window of the default screen. But at the time wintab is
initialized, there is no default display/screen yet.
Fix this by retrieving this information from the given GdkDeviceManager,
so we don't have to wait for the display to be in place before
initialization.
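Roughly, the needed objects can be reached from the given device manager
instead of the (not yet set up) default display, e.g.:

  GdkDisplay *display = gdk_device_manager_get_display (device_manager);
  GdkScreen  *screen  = gdk_display_get_default_screen (display);
  GdkWindow  *root    = gdk_screen_get_root_window (screen);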
https://bugzilla.gnome.org/show_bug.cgi?id=743330
In some layouts this inconsistency results in crashes in
gdk_gl_texture_from_surface(), since it uses gdk_gl_context_get_window()
but the returned window is not the same as the one being painted, so
"window->current_paint.surface" is NULL. I saw this problem when packing a
GtkGLArea into a GtkPaned.
https://bugzilla.gnome.org/show_bug.cgi?id=743146
- Specifically request the GL version when creating the context (see the
  sketch after this list). Just specifying the core profile bit results in
  the requested version defaulting to 1.0, which causes the core profile bit
  to be ignored and an arbitrary compatibility context to be returned.
- Fix GL painting by removing GL calls that have been deprecated by the 3.2
  core profile.
- Additionally, remove the glInvalidateFramebuffer() call; it is not
  supported by 3.2 core.
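A sketch of the explicit version request, shown here with the GLX attribute
names (the WGL equivalents work the same way); the surrounding attribute
handling is simplified:

  int attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
  };

  /* with the version requested explicitly, the core profile bit
   * is honoured instead of being ignored for a 1.0 request */
  context = glXCreateContextAttribsARB (dpy, config, share, True, attribs);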
https://bugzilla.gnome.org/show_bug.cgi?id=742953
The ICCCM says:
If the specified property is None, the requestor is an obsolete client.
Owners are encouraged to support these clients by using the specified
target atom as the property name to be used for the reply.
Let's do that, instead of crashing.
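In the selection request handling that amounts to something like the
following (a sketch; req stands for the incoming XSelectionRequestEvent):

  /* obsolete requestor: reply using the target atom as the property */
  if (req->property == None)
    req->property = req->target;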
https://bugzilla.gnome.org/show_bug.cgi?id=740613
The previous fix for this issue in 732af31424 was incomplete.
It seems that posix_fallocate gives an ENODEV error when
called on an fd opened with shm_open on FreeBSD. Fix up
the error check to only trigger if we get ENOSPC.
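posix_fallocate() returns the error number directly rather than setting
errno, so the check ends up looking roughly like:

  int ret = posix_fallocate (fd, 0, size);

  /* fds from shm_open() can report ENODEV here (seen on FreeBSD);
   * only a genuine lack of space is fatal */
  if (ret == ENOSPC)
    {
      close (fd);
      return FALSE;
    }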
https://bugzilla.gnome.org/show_bug.cgi?id=742980
If we use GDK_GL_PROFILE_3_2_CORE we are asking for a core profile
according to the GLX_ARB_create_context_profile extension. For that,
we pass the GLX_CONTEXT_CORE_PROFILE_BIT_ARB value for the
GLX_CONTEXT_PROFILE_MASK_ARB attribute.
The specification for the extension says that:
If the requested OpenGL version is less than 3.2,
GLX_CONTEXT_PROFILE_MASK_ARB is ignored and the functionality
of the context is determined solely by the requested version.
Since we're asking for a core profile, we assume a GL version greater
than or equal to 3.2; thus, we don't need to specify the
GLX_CONTEXT_MAJOR_VERSION_ARB or the GLX_CONTEXT_MINOR_VERSION_ARB
attributes, and instead just rely on whatever version GLX gives us.
This seems to work around a strange issue in Mesa; if we ask for a core
profile and any version > 3.0, we get broken rendering on any shared
context we create.
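So the attribute list ends up containing only the profile mask, roughly (a
sketch):

  int attribs[] = {
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    /* no MAJOR/MINOR version attributes: rely on what GLX gives us */
    None
  };

  context = glXCreateContextAttribsARB (dpy, config, share, True, attribs);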
Sending backingScaleFactor to a NULL NSWindow will silently give the
value 0 for the scale factor, causing insidious divide-by-zero bugs down
the line. This checks whether the NSWindow is NULL first, as is done
throughout the rest of the file.
Note that I don't have a hi-DPI OS X machine to test this on, though.
https://bugzilla.gnome.org/show_bug.cgi?id=738338
We've observed hangs of mutter when it initializes GTK+, which
are caused by initializing GL, which in turn makes xwayland
call back into mutter. With this change, mutter should just
disable GL support in GDK, and things will work.
This adds support for OpenGL to the GDK Windows backend using the WGL API
calls, which enables programs that use the GTK+ GLArea widgets to work on
Windows as well.
This also adds a simple utility function to query the version of OpenGL
that is supported by the Windows system, like the one provided by the X11
backend.
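The version query itself can be as small as parsing GL_VERSION from a
current context (a hypothetical helper; the Windows code additionally has
to create and make current a temporary WGL context first):

  static gboolean
  query_gl_version (int *major, int *minor)
  {
    const char *version = (const char *) glGetString (GL_VERSION);

    if (version == NULL)
      return FALSE;

    /* the GL version string starts with "<major>.<minor>" */
    return sscanf (version, "%d.%d", major, minor) == 2;
  }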
Many thanks to Alex (and Emmanuele, who started the OpenGL integration in
GTK+), who offered advice and help along the way, and to the X11 and
Wayland backends, which this work refers to and is modeled upon.
https://bugzilla.gnome.org/show_bug.cgi?id=740795
As the alignments, strides and image formats may differ across
platforms, make the texture upload a vfunc so that backends can override
the GL commands used to upload textures in the software implementation of
gdk_gl_texture_from_surface(), if necessary.
Suggested by Alex to avoid copying non-trivial portions of code, which
would then add a maintenance burden.
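A sketch of the shape of the vfunc and a default implementation; the class
layout and pixel format choices here are simplified, assuming
CAIRO_FORMAT_ARGB32 data on a little-endian machine:

  struct _GdkGLContextClass
  {
    /* ... */
    void (* upload_texture) (GdkGLContext    *context,
                             cairo_surface_t *image_surface,
                             int              width,
                             int              height,
                             guint            texture_target);
  };

  static void
  gdk_gl_context_default_upload_texture (GdkGLContext    *context,
                                         cairo_surface_t *image_surface,
                                         int              width,
                                         int              height,
                                         guint            texture_target)
  {
    /* cairo ARGB32 rows are stride bytes wide, 4 bytes per pixel */
    glPixelStorei (GL_UNPACK_ALIGNMENT, 4);
    glPixelStorei (GL_UNPACK_ROW_LENGTH,
                   cairo_image_surface_get_stride (image_surface) / 4);

    glTexImage2D (texture_target, 0, GL_RGBA, width, height, 0,
                  GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                  cairo_image_surface_get_data (image_surface));

    glPixelStorei (GL_UNPACK_ROW_LENGTH, 0);
  }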
https://bugzilla.gnome.org/show_bug.cgi?id=740795
We can't combine multiple draws into one for the software fallback,
because each quad has a different texture. And we generally don't
want to make a larger single texture because then we would have
to upload more data.
NULL-plus-something is seen by the compiler as an attempt to do
arithmetic on void *, which is a GCCism. Instead, do the math normally
and cast the result to void *.
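For example, for a buffer offset passed as a pointer argument (a
hypothetical illustration, not the exact call site):

  /* GCCism: arithmetic on void * */
  glVertexAttribPointer (attr, 2, GL_FLOAT, GL_FALSE, 0, NULL + offset);

  /* portable: do the math on integers, then cast the result */
  glVertexAttribPointer (attr, 2, GL_FLOAT, GL_FALSE, 0,
                         (void *) (gsize) offset);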
https://bugzilla.gnome.org/show_bug.cgi?id=740605
Use xdg_surface_set_window_geometry() to tell the compositor about the
shadow widths; this makes some gnome-shell/mutter features (edge resistance,
frames around windows in the overview, side maximization, ...) work properly
with GTK+.
In order to add this, some other places in gdkwindow-wayland had to gain
some knowledge about margins:
- xdg_surface_configure() now syncs the shadow after applying the state,
and gdk_wayland_window_set_shadow_width() possibly reconfigures the
window in order to preserve window geometry. This is necessary to keep
shadows in sync with state/geometry changes, as this does not happen
all at once.
- xdg_popups relative to an xdg_surface are shown relative to buffer
coordinates, so the left/top margins must be added there.
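Concretely, the geometry handed to the compositor is the window size with
the shadow margins subtracted (a sketch; the margin fields are named here
for illustration):

  xdg_surface_set_window_geometry (impl->xdg_surface,
                                   impl->margin_left,
                                   impl->margin_top,
                                   width  - impl->margin_left - impl->margin_right,
                                   height - impl->margin_top  - impl->margin_bottom);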
https://bugzilla.gnome.org/show_bug.cgi?id=736742
This requires us to use GL_TRIANGLES and six verts per quad instead
of four, which makes me think it might not be worth it on
well-optimized GL drivers. However, from talking to some driver
developers about it, the GL_TRIANGLES should be faster, since this
means that there's one giant contiguous buffer instead of many small
buffers.
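That is, every quad becomes two triangles that share a diagonal, roughly (a
sketch, 2D positions only):

  /* quad (x1,y1)-(x2,y2) as six GL_TRIANGLES vertices */
  const float verts[12] = {
    x1, y1,   x2, y1,   x2, y2,   /* first triangle  */
    x1, y1,   x2, y2,   x1, y2    /* second triangle */
  };

  /* after appending all quads to the one contiguous buffer: */
  glDrawArrays (GL_TRIANGLES, 0, n_quads * 6);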
If we were really rendering a lot of quads, I'd use an element buffer
and GL_PRIMITIVE_RESTART, but we're really not ever rendering that
many quads, and the setup cost for that would just be too annoying.