The setting of the surface scale even when the surface has not been
created was introduced due to a crash from dividing by the scale when
getting the buffers. The only reason I can see for this is that we get
the buffer from a non-existent surface when the wl_cursor has not yet
been set.
Instead, use the name field to avoid trying to use the non-existent
surface, effectively avoiding the division-by-zero that way.
https://bugzilla.gnome.org/show_bug.cgi?id=746141
The gtk-shell Wayland protocol extension is not meant to be backward
compatible right now, so avoid binding to any version that is not the
one supported.
https://bugzilla.gnome.org/show_bug.cgi?id=745721
gdk_window_ensure_native() can end up with a NULL parent pointer, which
it passes to find_native_parent_above()…but that expects a non-NULL
parent.
Found with scan-build.
https://bugzilla.gnome.org/show_bug.cgi?id=712760
When configuring Gtk+ with --disable-xkb, the build fails because of an
undefined reference to get_xkb().
This patch fixes this issue.
Signed-off-by: Eric Le Bihan <eric.le.bihan.dev@free.fr>
https://bugzilla.gnome.org/show_bug.cgi?id=739070
If the user requests a version less than 3.2, the version is forced to 3.2.
The previous checking code behaved inconsistently depending on which
minor version number was specified. This is now avoided by temporarily
converting the major/minor pair into a single integer.
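A minimal sketch of the idea (variable names are illustrative, not the
actual GDK code):

    /* Collapse major/minor into one comparable integer; clamp to 3.2. */
    int version = major * 100 + minor;
    if (version < 302)
      {
        major = 3;
        minor = 2;
      }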
https://bugzilla.gnome.org/show_bug.cgi?id=744288
And use these for the missing axes if the valuator mask is incomplete.
This used to work fine on tablets because the Wacom driver ensures all
valuators are sent, which is not the case when using the evdev driver.
https://bugzilla.gnome.org/show_bug.cgi?id=703610
When a window is hidden, its surface and all its roles are destroyed.
If this happens when we have already issued a wl_surface_commit and are
waiting for a frame callback, the clock will remain frozen for the
next time the window is shown.
To avoid this, keep track of the wl_surface_frame() calls issued,
and ensure the clock is thawed after hiding. If we happen to receive
the frame callback, it is just ignored.
https://bugzilla.gnome.org/show_bug.cgi?id=743427
We were just throwing the request away if the app asked to
fullscreen or maximize a window before it was mapped.
This is something the GdkWindow API explicitly supports,
so make it work by saving the state until the surface exists.
This fixes things under Weston. There are bugs in mutter
that keep this from working correctly with gnome-shell.
https://bugzilla.gnome.org/show_bug.cgi?id=745303
When the Wayland compositor vanishes, all applications connected will
receive a SIGPIPE as soon as they try to use wl_display_dispatch().
Do not use g_error() to terminate the applications when this occurs;
g_error() implies an error in the application, which is not truly the
case here.
Use g_warning() and exit() instead.
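A sketch of the intended handling (the exact call site and field names
in the Wayland backend may differ):

    if (wl_display_dispatch (display_wayland->wl_display) < 0)
      {
        g_warning ("Error reading events from display: %s",
                   g_strerror (errno));
        exit (1);
      }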
Signed-off-by: Olivier Fourdan <ofourdan@redhat.com>
https://bugzilla.gnome.org/show_bug.cgi?id=745289
Before this patch, we'd always allocate a full-size SHM buffer via
the wl_shm_pool, even though it would never be used. Instead, allocate a
logical 1x1 cairo image surface.
https://bugzilla.gnome.org/show_bug.cgi?id=745076
In order to support window scales for EGL windows, resize the
wl_egl_window to the window dimensions multiplied by the window scale,
just as with SHM window buffers.
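Roughly, this amounts to (field names are illustrative):

    wl_egl_window_resize (impl->egl_window,
                          window->width * impl->scale,
                          window->height * impl->scale,
                          0, 0);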
https://bugzilla.gnome.org/show_bug.cgi?id=745076
When the preferred surface scale changes, for example when entering a
wl_output with a higher scale than any previously entered output, recreate
the shm surface and redraw the window content with the new window scale.
Before this patch, the internal scale would be changed, but the shm
surface would not be recreated for the new scale, i.e. we'd attach a
buffer for a different scale than the one wl_surface.set_buffer_scale specified.
https://bugzilla.gnome.org/show_bug.cgi?id=745076
If the compositor is too old to handle surface buffer scales, never
try to change it. This will effectively always leave it in its
initial state, i.e. 1.
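A sketch of the guard, assuming the compositor version negotiated at
bind time is stored on the display (names are illustrative):

    /* wl_surface.set_buffer_scale is only available since version 3 */
    if (display_wayland->compositor_version >= 3)
      wl_surface_set_buffer_scale (impl->surface, scale);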
https://bugzilla.gnome.org/show_bug.cgi?id=745076
Don't try to paint onto an error surface. This happens for example when
gdk_cairo_set_source_pixbuf() is called with a pixbuf that is too big
for Cairo to handle.
Spotted by Christian Boxdörfer
First, attributes can be NULL (which is always the case when calling
gdk_window_ensure_native), so do not unconditionally dereference it.
Then the window_type should be taken directly from the GdkWindow as
in other backends (such as the X11 one for example).
https://bugzilla.gnome.org/show_bug.cgi?id=744942
Also try and clarify a few things about event propagation. Move
input-handling.xml into gtk-doc’s expand_content_files variable so it
automatically links to widget documentation. Add links from
gtk_widget_add_events() and friends to the new documentation.
https://bugzilla.gnome.org/show_bug.cgi?id=744054
It is useless to check the source window on the destination side;
at the moment it is always NULL. Fetch the display from the device instead,
which will be set for every GdkDragContext.
Some compositors might not offer wl_seat version 4, resulting in GTK+
clients not working on those compositors.
wl_seat version 4 introduces keyboard repeat information, but when that
information is missing it is retrieved from settings, hence there is no
reason to require wl_seat version 4.
This patch was tested against QtCompositor (5.5, dev branch)
and Weston 1.6.1.
Reviewed-by: Daniel Stone <daniels@collabora.com>
https://bugzilla.gnome.org/show_bug.cgi?id=744172
The existence of OpenGL implementations that do not provide full
core profile compatibility for reasons beyond the technical, like
llvmpipe not implementing floating point buffers, makes it harder to
keep GdkGLProfile around and to document the fact that we use core
profiles.
Since we do not have any existing profile except the default, we can
remove the GdkGLProfile and its related API from GDK and GTK+, and sweep
the whole thing under the carpet, while we wait for an extension that
lets us ask for the most compatible profile possible.
https://bugzilla.gnome.org/show_bug.cgi?id=744407
Store the OpenGL version when we first do the extensions check; this
allows client code to check the available GL version without requiring a
call to gdk_gl_context_make_current() and epoxy_gl_version().
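With the version cached, callers can query it without making the
context current, e.g.:

    int major, minor;
    gdk_gl_context_get_version (context, &major, &minor);
    g_print ("OpenGL version: %d.%d\n", major, minor);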
When using GDK_GL_PROFILE_3_2_CORE, we are not only specifying that
GDK should create a core profile; we are also specifying that the
minimum required version of OpenGL is 3.2.
We should also specify that the GDK_GL_PROFILE_DEFAULT profile is an
alias for GDK_GL_PROFILE_3_2_CORE.
Now that we have a two-stage GL context creation sequence, we can move
the profile to a pre-realize option, like the debug and forward
compatibility bits, or the GL version to use.
Emit an error if the profile is different.
This is a follow-up commit to commits cc45e82 (x11/gl: Ensure we use the
3.2 core profile) and 2d9081d (wayland/gl: Ensure we use the 3.2 core
profile), so that we do the same on GDK-Win32. Update variable names and
comments so that the code is clearer to people, as we still need a
temporary legacy WGL context first before we can use
wglCreateContextAttribsARB() to create a WGL core (3.2+) context.
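The bootstrap sequence looks roughly like this (a sketch, not the exact
GDK-Win32 code):

    /* Create a temporary legacy WGL context so the ARB entry point can
     * be resolved, then create the real core-profile context. */
    HGLRC legacy = wglCreateContext (hdc);
    wglMakeCurrent (hdc, legacy);
    PFNWGLCREATECONTEXTATTRIBSARBPROC create_context_attribs =
      (PFNWGLCREATECONTEXTATTRIBSARBPROC)
        wglGetProcAddress ("wglCreateContextAttribsARB");
    /* ... use create_context_attribs() to request a 3.2+ core context,
     * then make it current and delete the temporary legacy context. */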
https://bugzilla.gnome.org/show_bug.cgi?id=741946
Using version 1.20 with a forward-compatible 3.2 core context is incorrect
since OpenGL 3.2 deprecates shader version 1.20 (see section E.2). The latest
fglrx drivers will not compile these shaders.
https://bugzilla.gnome.org/show_bug.cgi?id=744203
We simply don't want to care about legacy OpenGL.
All supported platforms also have support for OpenGL ≥ 3.2; supporting
legacy contexts would complicate the internal code, and would force us
to use legacy GL contexts internally if the first context created by
the user is a legacy GL context, disabling creation of core-3.2
contexts after that.
We will need to fix all our code examples to use the Core 3.2 profile.
https://bugzilla.gnome.org/show_bug.cgi?id=741946
Like what is being done in the X11 and Wayland backends, create the
GdkWin32GLContext in two steps, where we only create the actual WGL context
in _gdk_win32_gl_context_realize().
https://bugzilla.gnome.org/show_bug.cgi?id=741946
Users of the GdkGLContext API should be allowed to set properties on the
shim GdkGLContext instance prior to realization, so that the
backend-specific implementation can use the value of those properties
when creating the windowing system specific resources.
The main three options are:
• a major/minor version tuple, to request a specific GL version
• a debug bit, to request a "debug context", which provides additional
validation and run time checking
• a forward compatibility bit, to request a context that does not
have deprecated functionality
See also:
- https://www.opengl.org/registry/specs/ARB/glx_create_context.txt
https://bugzilla.gnome.org/show_bug.cgi?id=741946
One of the major requests by OpenGL users has been the ability to
specify settings when creating a GL context, like the version to use
or whether the debug support should be enabled.
We have a couple of requirements in terms of API:
• avoid, if at all possible, the "C arrays of integers with
attribute, value pairs", which are hard to write and hard
to bind in non-C languages.
• allow failing in a recoverable way.
• do not make the GL context creation API a mess of arguments.
Looking at prior art, it seems that a common pattern is to split the
construction phase in two:
• a first phase that creates a GL context wrapper object and
does preliminary checks on the environment.
• a second phase that creates the backend-specific GL object.
We adopted a similar pattern:
• gdk_window_create_gl_context() creates a GdkGLContext
• gdk_gl_context_realize() creates the underlying resources
Calling gdk_gl_context_make_current() also realizes the context, so
simple GL users do not need to care. Advanced users will want to
call gdk_window_create_gl_context(), set up the optional requirements,
and then call gdk_gl_context_realize(). If either of these two steps
fails, it's possible to recover by changing the requirements, or simply
creating a new GdkGLContext instance.
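In practice the advanced path looks something like this (a sketch,
with error handling shortened):

    GError *error = NULL;
    GdkGLContext *context = gdk_window_create_gl_context (window, &error);

    if (context != NULL)
      {
        /* optional requirements, set before realization */
        gdk_gl_context_set_required_version (context, 3, 2);
        gdk_gl_context_set_debug_enabled (context, TRUE);

        if (!gdk_gl_context_realize (context, &error))
          {
            /* recoverable: tweak the requirements or create a new context */
            g_warning ("Unable to realize GL context: %s", error->message);
          }
      }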
https://bugzilla.gnome.org/show_bug.cgi?id=741946
The default ->upload_texture() also works for Windows since commit 27cf0fa,
as some of the problems described in bug 742953 applied to GL core
contexts on Windows as well before 27cf0fa. Clean up the GDK-Win32 code a
little bit as a result.
We should remove the mir and cairo surface before rendering the
transient_for, which will regenerate the cairo surface anyway.
Otherwise, we end up releasing both, when we only really want to get rid
of the mir surface.
Mouse over a parent menu[bar] didn't work while the menu was open.
The fix was to correct the behaviour of pointer crossing events so that
the pointer appears to be only inside one window at a time.
See: http://tronche.com/gui/x/xlib/events/window-entry-exit/normal.html
We need this because it fixes menu activation. The menu activation code
looks at the time between events to determine if mouse clicks happen too
quickly.
Commit ff256956b2 introduced a frame_clock_events_paused
flag, but only ever set it to TRUE, instead of unsetting it when
events are resumed. This was leading to assertion failures in
_gdk_display_unpause_events().
If we are disconnecting from a frame clock that has paused event
processing and hasn't issued a resume yet, make sure we resume the
events or they will stay blocked forever.
https://bugzilla.gnome.org/show_bug.cgi?id=742636
This function is given a barely set up GdkEvent, so the GdkDevice field
is still unset, causing warnings and misbehavior when the position
is queried for it.
Given that the wintab GTK+ code seems to rely somewhat hard on the wintab
device managing the pointer cursor, query the pointer position from the
pointer itself.
https://bugzilla.gnome.org/show_bug.cgi?id=743330
The window used NULL as a parent window, which defaults internally to
using the root window of the default screen. But at the time wintab is
initialized, there is no default display/screen yet.
Fix this by retrieving this information from the given GdkDeviceManager,
so we don't have to wait for the display to be in place before
initialization.
https://bugzilla.gnome.org/show_bug.cgi?id=743330
In some layouts this inconsistency results in crashes in
gdk_gl_texture_from_surface() since it uses gdk_gl_context_get_window() but
the returned window is not the same as the one that is being painted so
"window->current_paint.surface" is NULL. I saw this problem when packing a
GdkGLArea into a GtkPaned.
https://bugzilla.gnome.org/show_bug.cgi?id=743146
- Specifically request the GL version when creating a context. Just specifying
the core profile bit results in the requested version defaulting to 1.0, which
causes the core profile bit to be ignored and an arbitrary compatibility context
to be returned (see the attribute sketch below).
- Fix GL painting by removing GL calls that have been deprecated by the 3.2 core
profile.
- Additionally, remove the glInvalidateFramebuffer() call; it is not supported by
3.2 core.
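The attribute list now spells out the version explicitly, along the
lines of this sketch (the actual attribute handling in GDK may differ):

    int attribs[] = {
      GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
      GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
      GLX_CONTEXT_MINOR_VERSION_ARB, 2,
      None
    };
    GLXContext context =
      glXCreateContextAttribsARB (dpy, fbconfig, share_context, True, attribs);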
https://bugzilla.gnome.org/show_bug.cgi?id=742953
The ICCCM says:
If the specified property is None, the requestor is an obsolete client.
Owners are encouraged to support these clients by using the specified
target atom as the property name to be used for the reply.
Let's do that, instead of crashing.
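In code this is a one-line fallback (a sketch):

    /* Obsolete client: use the target atom as the reply property */
    if (property == None)
      property = target;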
https://bugzilla.gnome.org/show_bug.cgi?id=740613
The previous fix for this issue in 732af31424 was incomplete.
It seems that posix_fallocate gives an ENODEV error when
called on an fd opened with shm_open on FreeBSD. Fix up
the error check to only trigger if we get ENOSPC.
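The check becomes roughly this (a sketch; note that posix_fallocate()
returns the error code directly rather than setting errno):

    int ret = posix_fallocate (fd, 0, size);
    if (ret == ENOSPC)
      {
        /* genuinely out of space: fail */
      }
    /* other errors, e.g. ENODEV from a shm_open fd on FreeBSD, are ignored */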
https://bugzilla.gnome.org/show_bug.cgi?id=742980
If we use GDK_GL_PROFILE_3_2_CORE we are asking for a core profile
according to the GLX_ARB_create_context_profile extension. For that,
we pass the GLX_CONTEXT_CORE_PROFILE_BIT_ARB value for the
GLX_CONTEXT_PROFILE_MASK_ARB attribute.
The specification for the extension says that:
If the requested OpenGL version is less than 3.2,
GLX_CONTEXT_PROFILE_MASK_ARB is ignored and the functionality
of the context is determined solely by the requested version.
Since we're asking for a core profile, we assume a GL version greater
than or equal to 3.2; thus, we don't need to specify the
GLX_CONTEXT_MAJOR_VERSION_ARB or the GLX_CONTEXT_MINOR_VERSION_ARB
attributes, and instead just rely on whatever version GLX gives us.
This seems to work around a strange issue in Mesa; if we ask for a core
profile and any version > 3.0, we get broken rendering on any shared
context we create.
Sending backingScaleFactor to a NULL NSWindow will silently give the
value 0 for the scale factor, causing insidious divide-by-zero bugs down
the line. This checks whether the NSWindow is NULL first, as is done
throughout the rest of the file.
Note that I don't have a hi-DPI OS X machine to test this on, though.
https://bugzilla.gnome.org/show_bug.cgi?id=738338
We've observed hangs of mutter when it initializes GTK+, which
are caused by initializing GL, which in turn makes xwayland
call back into mutter. With this change, mutter should just
disable GL support in GDK, and things will work.
This adds support for OpenGL to the GDK Windows backend using the WGL API
calls, which enables programs that use the GTK+ GLArea widgets to work on
Windows as well.
This also adds a simple utility function to query the version of OpenGL
that is supported by the Windows system, like the one provided by the X11
backend.
Many thanks to Alex (and Emmanuele, who started the OpenGL integration in
GTK+), who offered advice and help along the way, as well as providing the
X11 and Wayland backends for this work to refer to and model upon.
https://bugzilla.gnome.org/show_bug.cgi?id=740795
As the alignments, strides and image formats may be different across
platforms, make the texture upload a vfunc so that backends can override
the GL commands used to upload textures in the software implementation of
gdk_gl_texture_from_surface(), if necessary.
Suggested by Alex to avoid copying non-trivial portions of code which would
then add maintenance burden.
https://bugzilla.gnome.org/show_bug.cgi?id=740795
We can't combine multiple draws into one for the software fallback,
because each quad has a different texture. And we generally don't
want to make a larger single texture because then we would have
to upload more data.
The ICCCM says:
If the specified property is None, the requestor is an obsolete client.
Owners are encouraged to support these clients by using the specified
target atom as the property name to be used for the reply.
Let's do that, instead of crashing.
https://bugzilla.gnome.org/show_bug.cgi?id=740613
NULL-plus-something is seen by the compiler as an attempt to do
arithmetic on void *, which is a GCC extension. Instead, do the math
normally and cast the result to a void *.
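For illustration (hypothetical variable names, not the exact GDK code;
offset stands in for some byte offset):

    /* GCC extension: arithmetic on void * (NULL + offset) */
    void *p = NULL + offset;
    /* portable: do the math on an integer type, then cast the result */
    void *q = (void *) (guintptr) offset;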
https://bugzilla.gnome.org/show_bug.cgi?id=740605
Use xdg_surface_set_window_geometry() to tell the compositor about the
shadow widths; this makes some gnome-shell/mutter features (edge resistance,
frames around windows in the overview, side maximization, ...) work correctly
with GTK+.
In order to add this, some other places in gdkwindow-wayland had to gain
some knowledge about margins:
- xdg_surface_configure() now syncs the shadow after applying the state,
and gdk_wayland_window_set_shadow_width() possibly reconfigures the
window in order to preserve window geometry. This is necessary to keep
shadows in sync with state/geometry changes, as this does not happen
all at once.
- xdg_popups relative to an xdg_surface are shown relative to buffer
coordinates, so the left/top margins must be added there.
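The geometry call itself amounts to roughly this (margin field names
are illustrative):

    xdg_surface_set_window_geometry (impl->xdg_surface,
                                     impl->margin_left,
                                     impl->margin_top,
                                     window->width  - impl->margin_left - impl->margin_right,
                                     window->height - impl->margin_top  - impl->margin_bottom);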
https://bugzilla.gnome.org/show_bug.cgi?id=736742
This requires us to use GL_TRIANGLES and six vertices per quad instead
of four, which makes me think it might not be worth it on
well-optimized GL drivers. However, from talking to some driver
developers about it, the GL_TRIANGLES approach should be faster, since it
means that there's one giant contiguous buffer instead of many small
buffers.
If we were really rendering a lot of quads, I'd use an element buffer
and GL_PRIMITIVE_RESTART, but we're really not ever rendering that
many quads, and the setup cost for that would just be too annoying.
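For a single quad the vertex stream looks like this (a sketch using
only x/y positions, with x1/y1 and x2/y2 as the quad corners):

    /* Two triangles per quad: 6 vertices instead of 4. */
    const GLfloat quad[12] = {
      x1, y1,   x2, y1,   x2, y2,   /* first triangle  */
      x1, y1,   x2, y2,   x1, y2,   /* second triangle */
    };
    glDrawArrays (GL_TRIANGLES, 0, 6);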
It's unused. At the same time, rename "begin_paint_region" to
"begin_paint". This will help us clean up how GDK painting works
in the future to allow more creative use of double-buffering.
This is needed in the edge case where the X11 backend rounded the actual
size, and the GL flipping really needs the correct window height to
do proper Y coordinate flipping.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
This is required for the X backend GL integration. If the
window has a height that is not a multiple of the window scale,
we can't properly do the y coordinate flipping that GL needs.
Other backends can ignore this and use the default implementation.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
Rather than just rounding down the position *and* the size separately,
we correctly calculate a rectangle in scaled window coords that fully
covers the real window size. This really only makes a difference
when the window size/position isn't a multiple of the window scale.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
Keep track of the exact size of X windows in underlying pixels; we
generally use the scaled size instead, but to properly handle the GL
viewport for windows that aren't a multiple of window_scale,
we need to know the real size.
https://bugzilla.gnome.org/show_bug.cgi?id=739750
Although we specify a resize increment to try to get a size that is
a multiple of the window scale, maximization typically wins
over the resize increment, so the window might be odd-sized.
Round *up* in this case, rather than down, since it's better to
truncate a line or two at the bottom and right of the window rather
than have a line or two that we don't know what to do with.
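i.e. the scaled size is computed with a round-up division (field names
are illustrative):

    /* Round up so the scaled size always covers the real pixel size. */
    window->width  = (impl->unscaled_width  + impl->window_scale - 1) / impl->window_scale;
    window->height = (impl->unscaled_height + impl->window_scale - 1) / impl->window_scale;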
https://bugzilla.gnome.org/show_bug.cgi?id=739750
The current way of exposing GDK API that should be considered internal
to GTK+ is to append a 'libgtk_only' suffix to the function name; this
is not really safe.
GLib has been using a slightly different approach: a private table of
function pointers, and a macro that allows accessing the desired symbol
inside that vtable.
We can copy the approach, and deprecate the 'libgtk_only' symbols in
lieu of outright removal.
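The shape of the approach, modeled on GLib's glib-private.h (the names
here are illustrative, not the final GDK API):

    typedef struct {
      void (* gdk_window_freeze_updates_internal) (GdkWindow *window);
      /* ... one pointer per formerly 'libgtk_only' entry point ... */
    } GdkPrivateVTable;

    /* The only exported symbol; returns the singleton vtable. */
    GdkPrivateVTable * gdk__private__ (void);

    /* Accessor macro used by GTK+ */
    #define gdk_window_freeze_updates_internal \
      (gdk__private__ ()->gdk_window_freeze_updates_internal)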
https://bugzilla.gnome.org/show_bug.cgi?id=739781
Instead of possibly calling wl_surface_commit() out of
GdkFrameClock::after-paint, tick the transient parent clock so ::after-paint
can eventually run.
This ensures that the subsurface coordinates (considered part of the state
of the parent) aren't committed at the wrong time, and are guaranteed to be
ordered with the wl_subsurface-relative state.
This is a gtk-side fix for https://bugzilla.gnome.org/show_bug.cgi?id=738887
cairo_region_copy(NULL) will effectively return an empty region, as this
function is always meant to return valid memory. This however inverts the
meaning of the NULL region and results in entirely non-clickable windows.
We need to export the symbols so they can be used in the
inspector, but we don't really want to make this supported
public API, so keep them out of installed headers.
Store the cursor name on the cursor (rather than always using its type).
Use this when setting a cursor on a surface.
The mir server will fall back to using standard cursors from the cursor
theme if the name used is not one of those defined by mir, which is more
or less what we want to happen here in the case of creating a cursor by
name.
If buffer age is undefined and the updated area is not the whole
window then we use bit-blits instead of swap-buffers to end the
frame.
This allows us to not repaint the entire window unnecessarily if
buffer_age is not supported, like e.g. with DRI2.
This moves the GDK_ALWAYS_USE_GL env var to GDK_GL=always.
It also changes GDK_DEBUG=nogl to GDK_GL=disable, as GDK_DEBUG
is really only about debug logging.
It also adds some completely new flags:
software-draw-gl:
Always use software fallback for drawing gl content to a cairo_t.
This disables the fastpaths that exist for drawing directly to
a window and instead reads back the pixels into a cairo image
surface.
software-draw-surface:
Always use software fallback for drawing cairo surfaces onto a
gl-using window. This disables e.g. texture-from-pixmap on X11.
software-draw:
Enables both the above.
The Mir backend was checking for button mask changes to generate the appropriate
GDK event. When Mir generates a touch event it has no button mask. In this case
we'll just generate a primary button event.
This was unnecessarily creating a framebuffer in the texture case,
and it was not properly setting up a framebuffer with the texture
as source in the software fallback w/ texture source case.
Commit afd9709aff made us keep impl window
cairo surfaces around across changes of window scale. But the
window scale setter forgot to update the size and scale of the
surface. The effect of this was that toggling the window scale
from 1 to 2 in the inspector was not causing the window to draw
at twice the size, although the X window was made twice as big,
and input was scaled too. Fix this by updating the surface when
the window scale changes.
We need to use this in the code path where we make the context
non-current during destroy, because at that point the window
could be destroyed and gdk_window_get_display() would return
NULL.
This moves the code related to frame sync into
the is_attached check, which means we don't ever have to
run it when making non-window-paint contexts current.
This is a minor speed thing, but the main advantage
is that it makes making a non-paint context current
thread-safe.
This is not really needed. The gl context is totally tied to the
window it is created from by virtue of sharing the context with the
paint context of that window and that context always has the visual
of the window (which we already can get).
Also, all user-visible contexts are essentially offscreen contexts, so
a visual doesn't make sense for them. They only use FBOs, which have
whatever format the user sets up.
To properly support multithreaded use we use a global GPrivate
to track the current context. Since we also don't need to track
the current context on the display we move gdk_display_destroy_gl_context
to GdkGLContext::discard.
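A sketch of the per-thread tracking (assuming the context is reffed
while it is current):

    static GPrivate thread_current_context =
      G_PRIVATE_INIT ((GDestroyNotify) g_object_unref);

    static void
    make_current (GdkGLContext *context)
    {
      /* Replace this thread's current context, dropping the old ref. */
      g_private_replace (&thread_current_context,
                         context != NULL ? g_object_ref (context) : NULL);
    }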
We used to have a weak ref to the cairo surface and it was kept
alive by the references in the normal windows, but that reference
was removed by d48adf9cee, causing
us to constantly create and destroy the surface.
https://bugzilla.gnome.org/show_bug.cgi?id=738648
We want to create windows with the default visuals such that we then
have the right visual for GLX when we want to create the paint GL
context for the window.
For instance (in bug 738670), the default rgba visual we picked for the
NVidia driver had an alpha size of 0, which gave us a BadMatch when later
trying to initialize a gl context on it with an alpha FBConfig.
Instead of just picking what the Xserver likes for the default and
picking the first rgba visual, we now actually call into GLX to pick
an appropriate visual.
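A sketch of what "calling into GLX" looks like (attribute values are
illustrative, not the exact ones GDK asks for):

    static const int attrs[] = {
      GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
      GLX_RENDER_TYPE,   GLX_RGBA_BIT,
      GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
      GLX_ALPHA_SIZE, 8,
      GLX_DOUBLEBUFFER, True,
      None
    };
    int n_configs = 0;
    GLXFBConfig *configs = glXChooseFBConfig (dpy, screen, attrs, &n_configs);
    XVisualInfo *visinfo = n_configs > 0
      ? glXGetVisualFromFBConfig (dpy, configs[0])
      : NULL;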
The visuals are typically sorted by some sort of "most useful first"
order. And picking the last one is likely to give us the weirdest
matching glx visual.
It is not possible to successfully build GTK+ on OS X 10.6 and below
since NSFullScreenWindowMask is only available starting with 10.7. Add
ifdef guards around setStyleMask: in order to allow it to build on
earlier OS X releases.
https://bugzilla.gnome.org/show_bug.cgi?id=737561
Commits 314b6abbe8 and eb9223c008 were ignoring
the fact that the code where found is set to 1 was modifying
col - which was an OK thing to do when that part of the code
was still breaking out of the loop, but it is no longer doing
that (since 2003!). Fix things up by storing the final col
value in a separate variable and using that after the loop.
https://bugzilla.gnome.org/show_bug.cgi?id=738886