We do not need to translate this on the CPU when we can instead push it
to the GPU in the same format and allow it to swizzle.
This avoids a huge number of memory allocations that showed up while
uploading the GTK animation in widget-factory.
The GLDEBUGPROC callback type is declared with APIENTRY, which is a
Windows-specific calling convention. That macro expands to nothing when
building on other platforms.
Fixes: #3268.
Go back to installing our debug message callback
unconditionally if G_ENABLE_CONSISTENCY_CHECKS is
defined, and allow opting into it using GDK_DEBUG=gl-debug
otherwise.
Always install the debug message callback when we can
and GDK_DEBUG=gl-debug is specified. Previously, we
were only installing the callback when the build was
a non-optimized debug build.
The gdk-pixbuf non-rgba format can be directly uploaded without
conversion.
The rgba format needs alpha premultiplication though, which is not
supported by GL during upload.
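Since GL cannot premultiply at upload time, the conversion has to happen
on the CPU before the upload. A minimal sketch of that per-pixel
operation (illustrative only, not the actual GDK code):

  #include <glib.h>

  /* Premultiply an unpremultiplied RGBA buffer (as delivered by
   * gdk-pixbuf) in place before uploading it. */
  static void
  premultiply_rgba (guchar *pixels, int width, int height, int stride)
  {
    int x, y;

    for (y = 0; y < height; y++)
      {
        guchar *row = pixels + y * stride;

        for (x = 0; x < width; x++)
          {
            guchar *p = row + x * 4;
            guchar  a = p[3];

            p[0] = (p[0] * a + 127) / 255;
            p[1] = (p[1] * a + 127) / 255;
            p[2] = (p[2] * a + 127) / 255;
          }
      }
  }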
GLES doesn't support the GL_BGRA + GL_UNSIGNED_INT_8_8_8_8_REV hack that
we use on desktop OpenGL to upload textures directly in the cairo
pixel format. This adds the required conversions to all the places
that currently need it.
We also add a data_format to the internal gdk_gl_context_upload_texture()
function to make it clearer what the format is. Currently it is always
the cairo image surface format, but eventually we want to support other
formats so that we can avoid some of the unnecessary conversions we do.
Also, the current gdk_gl_context_upload_texture() code always converts
to a cairo format and uploads that like we did before. Later commits
will allow this to use other upload formats that GL supports to avoid
conversions.
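As a rough illustration of the kind of conversion involved (not the
actual GDK code): CAIRO_FORMAT_ARGB32 stores premultiplied pixels as
native-endian 32-bit values, which on little-endian machines lays out
as B,G,R,A in memory, so a GLES upload with GL_RGBA + GL_UNSIGNED_BYTE
needs a byte swizzle such as:

  #include <glib.h>

  /* Hypothetical helper: convert one row of CAIRO_FORMAT_ARGB32
   * (premultiplied, native-endian ARGB) to the R,G,B,A byte order that
   * GLES expects for GL_RGBA + GL_UNSIGNED_BYTE uploads. */
  static void
  argb32_row_to_rgba (const guint32 *src, guchar *dst, int width)
  {
    int x;

    for (x = 0; x < width; x++)
      {
        guint32 p = src[x];

        dst[4 * x + 0] = (p >> 16) & 0xff;  /* R */
        dst[4 * x + 1] = (p >>  8) & 0xff;  /* G */
        dst[4 * x + 2] = (p >>  0) & 0xff;  /* B */
        dst[4 * x + 3] = (p >> 24) & 0xff;  /* A */
      }
  }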
gdk_gl_context_has_framebuffer_blit() and gdk_gl_context_has_frame_terminator()
were only used by GDK/Win32, and they do not provide performance advantages
in GTK master, so clean up the code a bit by dropping them.
We need to use GL_BGRA instead of GL_RGBA when doing glReadPixels() on
EGL on Windows (ANGLE) so that the red and blue channels don't end up
swapped.
Also fix the logic where we determine whether to bit blit or redraw
everything.
This adds an EGL-based renderer via the ANGLE project, which translates
EGL calls to Direct3D 9/11. This is done as a possible solution to
issue #105, especially for cases where the full GL extensions needed to
map OpenGL to Direct3D are unavailable or unreliable, or where the
OpenGL implementation from the graphics drivers is problematic.
To enable this, do the following:
-Build ANGLE and ensure the ANGLE libEGL.dll and libGLESv2.dll are
available. A sufficiently-recent ANGLE is needed for things to
work correctly; note that the copy of ANGLE that is included in
qtbase-5.10.1 is sufficient. ANGLE is licensed under a BSD 3-clause
license.
-Build libepoxy on Windows with EGL support enabled.
-Currently, prior to running GTK+ programs, the GDK_DEBUG envvar needs
to include gl-gles as at least one of its flags.
Known issues:
-Only OpenGL ES 3 is supported: ANGLE's ES 2 does not provide the needed
extensions, notably GL_OES_vertex_array_object, but its ES 3 support is
sufficient.
-There is no autodetection or fallback mechanism to enable using
EGL/ANGLE automatically yet. There are no plans to do this in this
commit.
For a given OpenGL context, macOS in particular does not support
enumeration / detection of OpenGL features that have been promoted to
core OpenGL functionality. It is possible other drivers are the same.

This change assumes support for GL_ARB_texture_non_power_of_two with
OpenGL 2.0+, GL_ARB_texture_rectangle with OpenGL 3.1+ and
GL_EXT_framebuffer_blit with OpenGL 3.0+. I failed to find definitive
information on whether GL_GREMEDY_frame_terminator has been promoted to
OpenGL core, or whether GL_ANGLE_framebuffer_blit or
GL_EXT_unpack_subimage have been promoted to core in OpenGL ES.

This change results in a significant GtkGLArea performance boost on
macOS.
Closes #2428
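A sketch of what such version-based checks look like with libepoxy,
mirroring the thresholds stated above (the real GDK code may differ):

  #include <epoxy/gl.h>
  #include <glib.h>

  /* epoxy_gl_version() returns e.g. 32 for OpenGL 3.2. */
  static gboolean
  have_npot_textures (void)
  {
    return epoxy_gl_version () >= 20 ||
           epoxy_has_gl_extension ("GL_ARB_texture_non_power_of_two");
  }

  static gboolean
  have_framebuffer_blit (void)
  {
    return epoxy_gl_version () >= 30 ||
           epoxy_has_gl_extension ("GL_EXT_framebuffer_blit");
  }

  static gboolean
  have_texture_rectangle (void)
  {
    return epoxy_gl_version () >= 31 ||
           epoxy_has_gl_extension ("GL_ARB_texture_rectangle");
  }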
We need to store the region *before* adding our own damage area, because
we want to only store the changes of this frame, not the whole history.
So do it in the same place Vulkan does it.
Fixes #1900
We used to pass 2 regions to GdkDrawContext.end_frame(), but the code
was confused about what they meant. So we don't do that anymore and only
pass the region that matters: the frame region.
As they require a draw context and the draw context is already bound to
the surface, it makes much more sense and reduces ambiguity to move
these APIs to the draw context.
As a side effect, we simplify GdkSurface APIs to a point where
GdkSurface now does not concern itself with drawing anymore at all,
apart from being the object that creates draw contexts.
That way, we can store the right region there: the actual painted area
instead of the exposed area (which is way too small).
Also, the GL context is the only user of this data, so storing it there
seems way smarter.
If G_ENABLE_CONSISTENCY_CHECKS is defined (i.e. if our buildtype is
'debug'), add an OpenGL debug callback that prints all debug messages
with a severity higher than SEVERITY_NOTIFICATION as a warning to the
console.
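A minimal sketch of what installing such a callback looks like with
epoxy (structure and names illustrative, not the exact GDK code):

  #include <epoxy/gl.h>
  #include <glib.h>

  static void APIENTRY
  gl_debug_message_callback (GLenum        source,
                             GLenum        type,
                             GLuint        id,
                             GLenum        severity,
                             GLsizei       length,
                             const GLchar *message,
                             const void   *user_data)
  {
    /* Only notifications are filtered out; everything else warns. */
    if (severity == GL_DEBUG_SEVERITY_NOTIFICATION)
      return;

    g_warning ("GL: %s", message);
  }

  static void
  install_gl_debug_callback (void)
  {
    /* Requires a current context with GL_KHR_debug (or GL >= 4.3). */
    if (!epoxy_has_gl_extension ("GL_KHR_debug"))
      return;

    glEnable (GL_DEBUG_OUTPUT);
    glEnable (GL_DEBUG_OUTPUT_SYNCHRONOUS);
    glDebugMessageCallback (gl_debug_message_callback, NULL);
  }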
This is an automatic rename of various things related
to the window->surface rename.
Public symbols changed by this are:
GDK_MODE_WINDOW
gdk_device_get_window_at_position
gdk_device_get_window_at_position_double
gdk_device_get_last_event_window
gdk_display_get_monitor_at_window
gdk_drag_context_get_source_window
gdk_drag_context_get_dest_window
gdk_drag_context_get_drag_window
gdk_draw_context_get_window
gdk_drawing_context_get_window
gdk_gl_context_get_window
gdk_synthesize_window_state
gdk_surface_get_window_type
gdk_x11_display_set_window_scale
gsk_renderer_new_for_window
gsk_renderer_get_window
gtk_text_view_buffer_to_window_coords
gtk_tree_view_convert_widget_to_bin_window_coords
gtk_tree_view_convert_tree_to_bin_window_coords
The commands that generated this are:
git sed -f g "GDK window" "GDK surface"
git sed -f g window_impl surface_impl
(cd gdk; git sed -f g impl_window impl_surface)
git sed -f g WINDOW_IMPL SURFACE_IMPL
git sed -f g GDK_MODE_WINDOW GDK_MODE_SURFACE
git sed -f g gdk_draw_context_get_window gdk_draw_context_get_surface
git sed -f g gdk_drawing_context_get_window gdk_drawing_context_get_surface
git sed -f g gdk_gl_context_get_window gdk_gl_context_get_surface
git sed -f g gsk_renderer_get_window gsk_renderer_get_surface
git sed -f g gsk_renderer_new_for_window gsk_renderer_new_for_surface
(cd gdk; git sed -f g window_type surface_type)
git sed -f g gdk_surface_get_window_type gdk_surface_get_surface_type
git sed -f g window_at_position surface_at_position
git sed -f g event_window event_surface
git sed -f g window_coord surface_coord
git sed -f g window_state surface_state
git sed -f g window_cursor surface_cursor
git sed -f g window_scale surface_scale
git sed -f g window_events surface_events
git sed -f g monitor_at_window monitor_at_surface
git sed -f g window_under_pointer surface_under_pointer
(cd gdk; git sed -f g for_window for_surface)
git sed -f g window_anchor surface_anchor
git sed -f g WINDOW_IS_TOPLEVEL SURFACE_IS_TOPLEVEL
git sed -f g native_window native_surface
git sed -f g source_window source_surface
git sed -f g dest_window dest_surface
git sed -f g drag_window drag_surface
git sed -f g input_window input_surface
git checkout NEWS* po-properties po docs/reference/gtk/migrating-3to4.xml
This renames the GdkWindow class and related classes (impl, backend
subclasses) to surface. Additionally it renames related types:
GdkWindowAttr, GdkWindowPaint, GdkWindowWindowClass, GdkWindowType,
GdkWindowTypeHint, GdkWindowHints, GdkWindowState, GdkWindowEdge
This is an automatic conversion using the below commands:
git sed -f g GdkWindowWindowClass GdkSurfaceSurfaceClass
git sed -f g GdkWindow GdkSurface
git sed -f g "gdk_window\([ _\(\),;]\|$\)" "gdk_surface\1" # Avoid hitting gdk_windowing
git sed -f g "GDK_WINDOW\([ _\(]\|$\)" "GDK_SURFACE\1" # Avoid hitting GDK_WINDOWING
git sed "GDK_\([A-Z]*\)IS_WINDOW\([_ (]\|$\)" "GDK_\1IS_SURFACE\2"
git sed GDK_TYPE_WINDOW GDK_TYPE_SURFACE
git sed -f g GdkPointerWindowInfo GdkPointerSurfaceInfo
git sed -f g "BROADWAY_WINDOW" "BROADWAY_SURFACE"
git sed -f g "broadway_window" "broadway_surface"
git sed -f g "BroadwayWindow" "BroadwaySurface"
git sed -f g "WAYLAND_WINDOW" "WAYLAND_SURFACE"
git sed -f g "wayland_window" "wayland_surface"
git sed -f g "WaylandWindow" "WaylandSurface"
git sed -f g "X11_WINDOW" "X11_SURFACE"
git sed -f g "x11_window" "x11_surface"
git sed -f g "X11Window" "X11Surface"
git sed -f g "WIN32_WINDOW" "WIN32_SURFACE"
git sed -f g "win32_window" "win32_surface"
git sed -f g "Win32Window" "Win32Surface"
git sed -f g "QUARTZ_WINDOW" "QUARTZ_SURFACE"
git sed -f g "quartz_window" "quartz_surface"
git sed -f g "QuartzWindow" "QuartzSurface"
git checkout NEWS* po-properties
Remove all the old 2.x and 3.x version annotations.
GTK+ 4 is a new start, and from the perspective of a
GTK+ 4 developer all these APIs have been around since
the beginning.
As far as possible, use per-display debug flags.
This will minimize the debug spew that we get from
the inspector if it is running on a separate display.
This is a way to query the damaged area of the backbuffer.
The GL renderer uses this to compute the extents of that damage region
(computed via buffer age) and uses them to minimize the area to redraw.
This changes the semantics of GL rendering to: when calling
gdk_window_begin_frame() with a GL context, the area returned by
gdk_gl_context_get_damage() needs to be redrawn, and every other pixel
of the backbuffer is guaranteed to be correct.
After gdk_window_end_frame() on a GL-drawn window, the whole backbuffer
must be correct.
We can always call glXSwapBuffers() now because of this.
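As a sketch of how buffer age translates into such a damage region (an
illustrative helper, not the GDK implementation): a buffer age of N
means the backbuffer contents are N frames old, so the damage is the
union of the last N-1 frame regions, and age 0 means the contents are
undefined and everything must be redrawn.

  #include <cairo.h>

  static cairo_region_t *
  compute_damage_from_buffer_age (cairo_region_t **past_frame_regions,
                                  int              n_regions,
                                  int              buffer_age,
                                  int              surface_width,
                                  int              surface_height)
  {
    cairo_region_t *damage;
    int i;

    if (buffer_age <= 0 || buffer_age - 1 > n_regions)
      {
        /* Unknown or too-old contents: redraw the full surface. */
        cairo_rectangle_int_t all = { 0, 0, surface_width, surface_height };
        return cairo_region_create_rectangle (&all);
      }

    damage = cairo_region_create ();
    for (i = 0; i < buffer_age - 1; i++)
      cairo_region_union (damage, past_frame_regions[i]);

    return damage;
  }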
... instead of a gl context.
This requires some refactoring in the way we mark the shared context as
drawing: We now call begin_frame/end_frame() on it and ignore the call
on the main context.
Unfortunately we need to do this check in all vfuncs, which sucks. But I
haven't found a better way.
This way, we can query the GL context's state via
gdk_gl_context_is_drawing().
Use this function to mark GL contexts as attached and grant them access
to the front/backbuffer for rendering.
All of this is still unused because GL drawing is still disabled.
Now that the use_es field is an int with a possible negative value, we
cannot use its truth value directly; we need to check whether it's a
positive value instead.
GDK defaults to asking for an OpenGL 3.2 Core Profile, but if we get a
legacy profile from the underlying windowing system, the OpenGL version
will be fixed to 3.0. If that happens, we need to set the legacy bit on
the GdkGLContext, since that bit will be used to determine the version
and type of GLSL shaders that will be used by application and toolkit
code alike.
We've already set ->use_es correctly at context creation time, all this
can possibly do is change our mind about what kind of GL we're using.
Signed-off-by: Adam Jackson <ajax@redhat.com>
https://bugzilla.gnome.org/show_bug.cgi?id=773180
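As a rough illustration of why these bits matter downstream: shader
code typically derives its GLSL version line from them. The mapping
below is illustrative only; GTK's actual shader selection is more
involved.

  #include <glib.h>

  static const char *
  glsl_version_for_context (gboolean use_es, gboolean legacy)
  {
    if (use_es)
      return "#version 300 es";   /* OpenGL ES 3.0 */
    if (legacy)
      return "#version 110";      /* legacy desktop GL */
    return "#version 150";        /* OpenGL 3.2 core */
  }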
Calling gdk_gl_context_realize() should always produce a valid result,
so we need to provide a default implementation to avoid calling a NULL
function pointer.
When uploading a Cairo image surface to a GL texture we cannot use
GL_BGRA and GL_UNSIGNED_INT_8_8_8_8_REV on OpenGL ES, as they do not
exist in the core spec.
On some platforms we can ask the GL context machinery to create a GLES
context, instead of a GL one.
In order to ask for a GLES context at GdkGLContext realization time, we
use a bit field like we do for forward-compatible or debug contexts.
The 'use-es' bit also changes the way we select a default version,
because OpenGL and OpenGL ES versions differ.
https://bugzilla.gnome.org/show_bug.cgi?id=743746
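A minimal sketch of how a caller can request a GLES context with this
API; the backend decides whether it can honor the request, and error
handling is abbreviated:

  #include <gdk/gdk.h>

  static gboolean
  request_gles (GdkGLContext *context, GError **error)
  {
    /* Must be called before the context is realized. */
    gdk_gl_context_set_use_es (context, TRUE);

    return gdk_gl_context_realize (context, error);
  }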
The g_print documentation explicitly says not to do this, since
g_print is meant to be redirected by applications. Instead use
g_message for logging that can be triggered via GTK_DEBUG.
We want to have the ability to fall back to legacy GL contexts when
creating them. In order to do so, we need to store the legacy bit on the
GdkGLContext, as well as being able to query it.
Setting the legacy bit from outside GDK is not possible; we cannot
create GL contexts in 3.2 core profile *and* compatibility modes at the
same time, and if we allowed users to select the legacy mode themselves,
it would break the creation of the GdkWindow's paint GL context.
What we do allow is falling back to legacy GL context if the platform
does not support 3.2 core profiles — for instance, on older GPUs or
inside virtualized environments.
We are also going to use the legacy bit internally, to choose which GL
API we can use when drawing GL content.
https://bugzilla.gnome.org/show_bug.cgi?id=756142
If the user requests a version less than 3.2, the version is forced to 3.2.
The previous checking code had inconsistent behavior depending on which
minor version number was specified. This is avoided now by temporarily
converting the major/minor pair into a single integer.
https://bugzilla.gnome.org/show_bug.cgi?id=744288
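A sketch of that normalization, assuming the 3.2 minimum described
above (illustrative, not the exact GDK code):

  /* Encode major/minor as a single integer so the comparison is
   * consistent regardless of which minor version was requested. */
  static void
  normalize_required_version (int *major, int *minor)
  {
    int version = *major * 100 + *minor;

    if (version < 302)   /* anything below 3.2 is bumped to 3.2 */
      version = 302;

    *major = version / 100;
    *minor = version % 100;
  }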
The existence of OpenGL implementations that do not provide full core
profile compatibility for reasons beyond the technical, like llvmpipe
not implementing floating point buffers, makes it harder to keep
GdkGLProfile around and to document the fact that we use core profiles.
Since we do not have any existing profile except the default, we can
remove the GdkGLProfile and its related API from GDK and GTK+, and sweep
the whole thing under the carpet, while we wait for an extension that
lets us ask for the most compatible profile possible.
https://bugzilla.gnome.org/show_bug.cgi?id=744407
Store the OpenGL version when we first do the extensions check; this
allows client code to check the available GL version without requiring a
call to gdk_gl_context_make_current() and epoxy_gl_version().
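For instance, client code can then read the stored version back
without making the context current (a minimal sketch):

  #include <gdk/gdk.h>

  static void
  print_context_version (GdkGLContext *context)
  {
    int major, minor;

    gdk_gl_context_get_version (context, &major, &minor);
    g_message ("Using OpenGL %d.%d", major, minor);
  }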
Now that we have a two-stage GL context creation sequence, we can move
the profile to a pre-realize option, like the debug and forward
compatibility bits, or the GL version to use.
We simply don't want to care about legacy OpenGL.
All supported platforms also have support for OpenGL ≥ 3.2; supporting
legacy contexts would complicate the internal code, and would force us
to use legacy GL contexts internally if the first context created by the
user is a legacy GL context, disabling the creation of core-3.2 contexts
after that.
We will need to fix all our code examples to use the Core 3.2 profile.
https://bugzilla.gnome.org/show_bug.cgi?id=741946
Users of the GdkGLContext API should be allowed to set properties on the
shim GdkGLContext instance prior to realization, so that the
backend-specific implementation can use the value of those properties
when creating the windowing system specific resources.
The main three options are:
• a major/minor version tuple, to request a specific GL version
• a debug bit, to request a "debug context", which provides additional
validation and run time checking
• a forward compatibility bit, to request a context that does not
have deprecated functionality
See also:
- https://www.opengl.org/registry/specs/ARB/glx_create_context.txt
https://bugzilla.gnome.org/show_bug.cgi?id=741946
One of the major requests by OpenGL users has been the ability to
specify settings when creating a GL context, like the version to use
or whether the debug support should be enabled.
We have a couple of requirements in terms of API:
• avoid, if at all possible, the "C arrays of integers with
attribute, value pairs", which are hard to write and hard
to bind in non-C languages.
• allow failing in a recoverable way.
• do not make the GL context creation API a mess of arguments.
Looking at prior art, it seems that a common pattern is to split the
construction phase in two:
• a first phase that creates a GL context wrapper object and
does preliminary checks on the environment.
• a second phase that creates the backend-specific GL object.
We adopted a similar pattern:
• gdk_window_create_gl_context() creates a GdkGLContext
• gdk_gl_context_realize() creates the underlying resources
Calling gdk_gl_context_make_current() also realizes the context, so
simple GL users do not need to care. Advanced users will want to
call gdk_window_create_gl_context(), set up the optional requirements,
and then call gdk_gl_context_realize(). If either of these two steps
fails, it's possible to recover by changing the requirements, or simply
creating a new GdkGLContext instance.
https://bugzilla.gnome.org/show_bug.cgi?id=741946
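A sketch of what the advanced path looks like for a client, under the
API described above (error handling abbreviated):

  #include <gdk/gdk.h>

  static GdkGLContext *
  create_context_for (GdkWindow *window)
  {
    GError *error = NULL;
    GdkGLContext *context;

    context = gdk_window_create_gl_context (window, &error);
    if (context == NULL)
      {
        g_warning ("Unable to create a GL context: %s", error->message);
        g_clear_error (&error);
        return NULL;
      }

    /* Optional pre-realize requirements. */
    gdk_gl_context_set_required_version (context, 3, 2);
    gdk_gl_context_set_debug_enabled (context, TRUE);

    if (!gdk_gl_context_realize (context, &error))
      {
        /* Recoverable: adjust the requirements or create a new context. */
        g_warning ("Unable to realize the GL context: %s", error->message);
        g_clear_error (&error);
        g_clear_object (&context);
        return NULL;
      }

    return context;
  }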
- Specifically request the GL version when creating the context. Just specifying
the core profile bit results in the requested version defaulting to 1.0, which
causes the core profile bit to be ignored and an arbitrary compatibility context
to be returned.
- Fix GL painting by removing GL calls that have been deprecated by the 3.2 core
profile.
- Additionally remove the glInvalidateFramebuffer() call, it is not supported by
3.2 core.
https://bugzilla.gnome.org/show_bug.cgi?id=742953
As the alignments, strides and image formats may be different across
platforms, make the texture upload a vfunc so that backends can override
the GL commands used to upload textures in the software implementation
of gdk_gl_texture_from_surface(), if necessary.
Suggested by Alex to avoid copying non-trivial portions of code, which
would then add maintenance burden.
https://bugzilla.gnome.org/show_bug.cgi?id=740795
If buffer age is undefined and the updated area is not the whole
window then we use bit-blits instead of swap-buffers to end the
frame.
This allows us to not repaint the entire window unnecessarily if
buffer_age is not supported, like e.g. with DRI2.
We need to use this in the code path where we make the context
non-current during destroy, because at that point the window
could be destroyed and gdk_window_get_display() would return
NULL.
This is not really needed. The gl context is totally tied to the
window it is created from by virtue of sharing the context with the
paint context of that window and that context always has the visual
of the window (which we already can get).
Also, all user visible contexts are essentially offscreen contexts, so
a visual doesn't make sense for them. They only use FBOs, which have
whatever format the user sets up.
To properly support multithreaded use we use a global GPrivate
to track the current context. Since we also don't need to track
the current context on the display we move gdk_display_destroy_gl_context
to GdkGLContext::discard.
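A minimal sketch of per-thread tracking of the current context with a
GPrivate (names are illustrative, not the actual GDK code):

  #include <gdk/gdk.h>

  static GPrivate thread_current_context = G_PRIVATE_INIT (g_object_unref);

  static void
  set_thread_current_context (GdkGLContext *context)
  {
    /* Replacing drops the reference held on the previous context. */
    g_private_replace (&thread_current_context,
                       context ? g_object_ref (context) : NULL);
  }

  static GdkGLContext *
  get_thread_current_context (void)
  {
    return g_private_get (&thread_current_context);
  }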
It's not really reasonable to handle failures of make_current; it
basically only happens if you pass invalid arguments to it, and that's
not something we trap for similar things on the X drawing side.
If GL is not supported that should be handled by the context creation
failing, and anything going wrong after that is essentially a critical
(or an async X error).
We make user facing gl contexts not attached to a surface if possible,
or attached to dummy surfaces. This means nothing can accidentally
read/write to the toplevel back buffer.
This adds the new type GdkGLContext that wraps an OpenGL context for a
particular native window. It also adds support for the gdk paint
machinery to use OpenGL to draw everything. As soon as anyone creates
a GL context for a native window we create a "paint context" for that
GdkWindow and switch to using GL for painting it.
This commit contains only an implementation for X11 (using GLX).
The way painting works is that all client gl contexts draw into
offscreen buffers rather than directly to the back buffer, and the
way something gets onto the window is by using gdk_cairo_draw_from_gl()
to draw part of that buffer onto the draw cairo context.
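A sketch of what client code looks like with gdk_cairo_draw_from_gl()
(widget setup and the GL rendering of the texture are omitted; the
helper struct is illustrative):

  #include <gtk/gtk.h>
  #include <epoxy/gl.h>

  typedef struct {
    guint tex_id;      /* GL texture rendered by a client GL context */
    int   tex_width;   /* size of that texture, in pixels */
    int   tex_height;
  } GlContent;

  /* "draw" handler that composites GL content; GDK takes the fast GL
   * path when drawing to the window's back buffer and falls back to
   * reading the pixels into cairo otherwise. */
  static gboolean
  draw_gl_content (GtkWidget *widget, cairo_t *cr, gpointer user_data)
  {
    GlContent *content = user_data;
    GdkWindow *window = gtk_widget_get_window (widget);

    gdk_cairo_draw_from_gl (cr, window,
                            content->tex_id, GL_TEXTURE,
                            gdk_window_get_scale_factor (window),
                            0, 0,
                            content->tex_width, content->tex_height);

    return FALSE;
  }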
As a fallback (if we're doing redirected drawing or some effect like a
cairo_push_group()) we read back the gl buffer into memory and composite
using cairo. This means that GL rendering works in all cases, including
rendering to a PDF. However, this is not particularly fast.
In the *typical* case, where we're drawing directly to the window in
the regular paint loop, we hit the fast path. The fast path uses OpenGL
to draw the buffer to the window back buffer, either by blitting or
texturing. Then we track the region that was drawn, and when the draw
ends we paint the normal cairo surface to the window (using
texture-from-pixmap in the X11 case, or texture from cairo image
otherwise) in the regions where nothing was painted with GL.
There are some complexities wrt layering of gl and cairo areas though:
* We track via gdk_window_mark_paint_from_clip() whenever gtk is
painting over a region we previously rendered with opengl
(flushed_region). This area (needs_blend_region) is blended
rather than copied at the end of the frame.
* If we're drawing a gl texture with alpha we first copy the current
cairo_surface inside the target region to the back buffer before
we blend over it.
These two operations allow us full stacking of transparent gl and cairo
regions.