We need to ensure that an EGL surface exists before we call
eglMakeCurrent() with it. Otherwise we might end up binding to
EGL_NO_SURFACE and then never revising that decision, which leads to
rendering not into the backbuffer, but into the void.
Fixes X11 rendering being black
Fixes #6930
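A rough sketch of the idea, with purely illustrative variable names
(egl_display, egl_config, native_window, egl_context and the egl_surface
field are assumptions, not the actual GDK code):

    /* Make sure the EGL surface exists before binding it; otherwise we
     * would bind EGL_NO_SURFACE and render into the void. */
    if (self->egl_surface == EGL_NO_SURFACE)
      self->egl_surface = eglCreateWindowSurface (egl_display, egl_config,
                                                  native_window, NULL);

    eglMakeCurrent (egl_display, self->egl_surface, self->egl_surface,
                    egl_context);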
The current resizing implementation in the GDK-Win32 backend does not
tell GDK early enough that a resize of the surface (i.e. the HWND) has
happened, so GDK cannot re-create the Vulkan swapchain in time. This is
apparent on nVidia drivers (and AMD drivers that utilize the mailbox
presentation mode on Windows) when the HWND is being enlarged
interactively.
The workaround, barring a refactor of the Windows resizing/presentation
code, is to call _gdk_surface_update_size() when we really did resize
the HWND while handling queued resizes via SetWindowPos().
The existing call in gdksurface-win32.c in
_gdk_win32_surface_compute_size() remains required, otherwise the
surface won't display initially.
Thanks to Benjamin Otte for pointing this possibility out.
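Roughly, the workaround looks like this; a sketch only, the exact
signature of the internal _gdk_surface_update_size() helper and the
surrounding variables are assumed:

    /* flush a queued resize by actually resizing the HWND ... */
    SetWindowPos (hwnd, NULL,
                  x, y, width, height,
                  SWP_NOACTIVATE | SWP_NOZORDER);
    /* ... and immediately tell GDK about the new size, so the Vulkan
     * swapchain can be re-created before the next frame */
    _gdk_surface_update_size (surface);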
We don't need to hardcode all the interface names as string literals,
since they come as part of the wl_interface structs in the protocol
bindings we use.
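For example, a registry handler can compare against the name carried by
the binding instead of a hardcoded string (a sketch; the handler shown
is illustrative):

    if (strcmp (interface, wl_compositor_interface.name) == 0)
      display->compositor = wl_registry_bind (registry, id,
                                              &wl_compositor_interface,
                                              version);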
The easiest things trigger the silliest mistakes. Add tests
for various properties we want our transfer functions to have
(a sketch follows the list), such as:
- be inverses of each other
- stay within the defined ranges
- be symmetric around 0
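A minimal sketch of such a test, where eotf()/oetf() and test_values
are illustrative stand-ins for the actual transfer functions and inputs
under test:

    for (i = 0; i < G_N_ELEMENTS (test_values); i++)
      {
        float v = test_values[i];

        /* the two directions should be inverses of each other */
        g_assert_cmpfloat_with_epsilon (oetf (eotf (v)), v, 0.001);
        /* and symmetric around 0 */
        g_assert_cmpfloat_with_epsilon (eotf (-v), -eotf (v), 0.001);
      }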
Set the primaries without a name if that is supported and the named
primaries are not. But prefer named primaries if available.
This is just an attempt at defensive coding.
If we get sent primaries whose values match those of named primaries,
treat them like named primaries.
Fixes colorstate support on KWin, which never sends named primaries.
If the texture covers all of the black background (like when watching a
1080p stream fullscreen on a 1080p monitor), we don't need a compositor
with single pixel buffer support.
Fixes offloading in KWin.
There's a ton of error checking that we want to do here, because it
turns out it is not really useful to create a subsurface for the single
pixel buffer when we don't even support single pixel buffers.
begin_frame_full() does not return a reference; we assume that the
color state stays alive for the duration of the frame anyway,
so end_frame() simply sets priv->color_state to NULL.
We need to round outwards, and a 1x1 rectangle with offset 0.5,0.5 should
end up as a 2x2 rectangle with offset 0,0 when rounded, not as a 3x3
rectangle.
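What rounding outwards means here, sketched with graphene-style fields
(the integer outputs are illustrative):

    x1 = floorf (rect->origin.x);
    y1 = floorf (rect->origin.y);
    x2 = ceilf (rect->origin.x + rect->size.width);
    y2 = ceilf (rect->origin.y + rect->size.height);
    /* 1x1 at 0.5,0.5 => x1 = y1 = 0, x2 = y2 = 2, i.e. 2x2 at 0,0 */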
The backbuffer's damage region is the region of the backbuffer
that doesn't contain up-to-date contents. This is determined
by the backbuffer's age and previous frame's paint regions.
This enables incremental rendering.
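A rough sketch of that computation, assuming cairo regions and
illustrative names for the recorded per-frame paint regions:

    if (buffer_age == 0 || buffer_age > n_recorded_frames)
      {
        /* age unknown or too old: the whole backbuffer is stale */
        damage = cairo_region_create_rectangle (&surface_rect);
      }
    else
      {
        /* everything painted in the last age-1 frames is stale here */
        damage = cairo_region_create ();
        for (i = 0; i < buffer_age - 1; i++)
          cairo_region_union (damage, previous_paint_regions[i]);
      }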
Unless there is a very good reason to use memcpy(), don't use it.
Not using it keeps the compiler from screwing up and wasting tons of CPU
that it didn't need to waste.
Gets my framerate back up from 1250 to 1750 and makes sysprof no longer
report ~40% of render time spent in gsk_gpu_colorize_op().
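The kind of change this is about, sketched with an illustrative struct
and field: for a small fixed-size copy, plain assignments give the
compiler more room to optimize than a memcpy() call.

    /* before: memcpy (instance->color, color, sizeof (float) * 4); */
    for (i = 0; i < 4; i++)
      instance->color[i] = color[i];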
We know it at begin_frame() time, so if we pass it there instead of
end_frame(), we can use it then to make decisions about opacity.
For example, we could notice that the whole surface is opaque and choose
an RGBx format.
We don't do that yet, but now we could.
The opaque rect from the rendernodes are now used to set the opaque
region in the backend.
This means applications can now set a transparent window background and
make individual parts of their window opaque.
But because this is a best effort method, it is not guaranteed to
succeed in finding all opaque regions, in particular if the rendernodes
used to build it are not straightforward to analyze.
We are using so many internal extra features that it is no longer a good
idea to use these functions.
And they aren't really used anyway.
These extra features are also constantly in flux and rely on internal
APIs, so exposing them would just cause extra pain.
By using the inlining macro trick, we can work around deprecation
warnings from removing this function as a public API, which will happen
in the next commits.
GLContexts marked as surface_attached are always attached to the surface
in make_current().
Other contexts continue to only get attached to their surface between
begin_frame() and end_frame().
All our renderers use surface-attached contexts now.
Public API only gives out non-surface-attached contexts.
The benefit here is that we can now call make_current() whenever we
want, because it will not cause a re-make_current() whether we call it
outside or inside the begin/end_frame() region.
Or in other words: I want to call make_current() before begin_frame()
without a performance penalty, and now I can.
... and pass the opaque region of the node.
We don't do anything with it yet, this is just the plumbing.
The original function still exists; it passes NULL, which is the value
for no opaque region at all.
If an image description query is running while the surface gets
destroyed, we were not properly cleaning up, causing the callbacks to be
emitted on freed variables.
We were comparing with destination stride, not with source stride, and
in rare cases when those were different, this would trigger aborts in
the testsuite.
Each time we create a new window, we create a new EGLSurface. Each time
we destroy a window, we fail to destroy the EGLSurface, because we pass
a GdkDisplay instead of an EGLDisplay to eglDestroySurface().
This effectively leaks not only the EGL surface metadata, but also the
associated DMA buffers. For applications that open and close many
windows over their lifetime and run for a long time, for example a
terminal emulator server, this causes a significant memory leak, as the
memory will only ever be freed once the application process itself
exits, if ever.
Fix this by passing an actual EGLDisplay instead of a GdkDisplay to
eglDestroySurface().
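The fix, sketched with illustrative variable names: hand
eglDestroySurface() the EGLDisplay rather than the GdkDisplay.

    eglDestroySurface (egl_display, self->egl_surface);
    self->egl_surface = EGL_NO_SURFACE;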
This matches what the gpu renderer does when printing
colorstates.
It also avoids printing "S*RGBA8" for the format; instead it prints
"SRGBA8(p)" now.
Converting to and from xyz turns out to be more difficult than
expected, depending on what whitepoint you choose. And different
specs choose different whitepoints, so we can't directly map
css xyz to cicp xyz anyway.
Add a function for converting a single color from one
color state to another. This is a generalization of the
already existing function to convert a GdkRGBA to another
color state.
The outlook for mutter supporting this in GNOME 47 is cloudy,
so let's flip the switch back. You can still set
USE_POINTER_VIEWPORT in the environment to try this code.
When the compositor sends us an image description, we currently happily
reuse it.
However, those image descriptions may contain optional properties that
we do not handle, for example the reference white level. So if we were
to reuse such an image description, we would set a wrong reference white
level.
To avoid issues like that, never use compositor-provided image
descriptions.
However, query those image descriptions and map them to the closest
GdkColorState, so that we can quickly look up *our* version of that
image description and use that one.
When finalizing a subsurface, we need to make sure it is removed
from the sibling lists in its parent, or bad things will happen.
This should fix crashes seen in Epiphany nightly.
Fixes: #6891
convert_func2 is a 'from' conversion function, i.e. it expects to
be passed the target color state. This was wrong both in
gdk_memory_convert and gdk_memory_convert_color_state.
This is widely assumed, but is not guaranteed by Standard C, and is
known to be false on CHERI architectures (which have 64-bit sizes and
128-bit tagged pointers). Add a static assertion to ensure that GTK
will not build on platforms where this assumption does not hold.
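A sketch of such an assertion, assuming (from the CHERI example above)
that the property being asserted is that a pointer value fits into a
gsize:

    /* fail the build if a pointer cannot be stored in a gsize */
    G_STATIC_ASSERT (sizeof (void *) <= sizeof (gsize));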
As discussed on GNOME/gtk!7510, if GTK switches from gsize to uintptr_t
as its representation of the underlying bits in a pointer, GTK maintainers
would prefer that to be done project-wide so that it's done consistently,
after which this static assertion could be removed.
At the time of writing, GLib makes the same assumption (GNOME/glib#2842),
but GLib contributors are gradually removing it (mostly by replacing gsize
with uintptr_t where a pointer-sized quantity is needed). Finishing
that work in GLib would be a prerequisite for being able to make GTK
work on the affected platforms.
Signed-off-by: Simon McVittie <smcv@debian.org>