The easiest things trigger the silliest mistakes. Add tests
for various properties we want our transfer functions to have,
such as:
- be inverses of each other
- stay within the defined ranges
- be symmetric around 0
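A rough sketch of what such a property test could look like (the
srgb_oetf()/srgb_eotf() helpers, the epsilon and the sample grid are
stand-ins for illustration, not the actual gdk code):

    #include <assert.h>
    #include <math.h>

    /* Mirrored sRGB transfer functions, used here only as an example pair. */
    static float
    srgb_oetf (float v)
    {
      float a = fabsf (v);
      float r = a <= 0.0031308f ? 12.92f * a
                                : 1.055f * powf (a, 1.f / 2.4f) - 0.055f;
      return copysignf (r, v);
    }

    static float
    srgb_eotf (float v)
    {
      float a = fabsf (v);
      float r = a <= 0.04045f ? a / 12.92f
                              : powf ((a + 0.055f) / 1.055f, 2.4f);
      return copysignf (r, v);
    }

    int
    main (void)
    {
      for (int i = -1000; i <= 1000; i++)
        {
          float x = i / 1000.f;

          /* inverses of each other */
          assert (fabsf (srgb_eotf (srgb_oetf (x)) - x) < 1e-4f);
          /* symmetric around 0 */
          assert (srgb_oetf (-x) == -srgb_oetf (x));
          /* stays within the defined range */
          assert (fabsf (srgb_oetf (x)) <= 1.f);
        }

      return 0;
    }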
Set primaries without a name (by value) if that is supported and the
named primaries are not, but prefer named primaries if available.
This is just an attempt at defensive coding.
If we get sent primaries by value and the values match a set of named
primaries, treat them like named primaries.
Fixes colorstate support on Kwin, which never sends named primaries.
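Roughly the idea, as a sketch (the struct, the table, the names and the
tolerance are made up for illustration; the real code lives in the
Wayland color management path):

    #include <math.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct { double rx, ry, gx, gy, bx, by, wx, wy; } Primaries;
    typedef struct { const char *name; Primaries values; } NamedPrimaries;

    static const NamedPrimaries named_primaries[] = {
      { "srgb",   { 0.640, 0.330, 0.300, 0.600, 0.150, 0.060, 0.3127, 0.3290 } },
      { "bt2020", { 0.708, 0.292, 0.170, 0.797, 0.131, 0.046, 0.3127, 0.3290 } },
    };

    static bool
    primaries_equal (const Primaries *a, const Primaries *b)
    {
      const double eps = 0.00005;

      return fabs (a->rx - b->rx) < eps && fabs (a->ry - b->ry) < eps &&
             fabs (a->gx - b->gx) < eps && fabs (a->gy - b->gy) < eps &&
             fabs (a->bx - b->bx) < eps && fabs (a->by - b->by) < eps &&
             fabs (a->wx - b->wx) < eps && fabs (a->wy - b->wy) < eps;
    }

    /* Returns the matching named primaries, or NULL to fall back to
     * setting the primaries by value. */
    static const char *
    find_named_primaries (const Primaries *p)
    {
      for (size_t i = 0; i < sizeof (named_primaries) / sizeof (named_primaries[0]); i++)
        if (primaries_equal (p, &named_primaries[i].values))
          return named_primaries[i].name;

      return NULL;
    }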
If the texture covers all of the black background (like when watching a
1080p stream fullscreen on a 1080p monitor), we don't need a compositor
with single-pixel-buffer support.
Fixes offloading in Kwin.
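The check amounts to something like this (a sketch with made-up names;
the real code operates on the offload state):

    #include <gdk/gdk.h>

    /* If the texture rect fully covers the surface rect, the black
     * background behind it can never become visible, so no
     * single-pixel-buffer subsurface is needed for it. */
    static gboolean
    needs_black_background (const GdkRectangle *surface_rect,
                            const GdkRectangle *texture_rect)
    {
      GdkRectangle covered;

      if (!gdk_rectangle_intersect (surface_rect, texture_rect, &covered))
        return TRUE;

      return !gdk_rectangle_equal (&covered, surface_rect);
    }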
There's a ton of error checking that we want to do here, because it
turns out it is not really useful to create a subsurface for the single
pixel buffer when we don't even support single pixel buffers.
begin_frame_full does not return a reference; we assume that the
color state stays alive for the duration of the frame anyway,
so end_frame simply sets priv->color_state to NULL.
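As a schematic (the struct and function names here are illustrative,
not the actual private API):

    #include <gdk/gdk.h>

    typedef struct {
      GdkColorState *color_state;  /* borrowed, never referenced */
    } FramePrivate;

    static void
    begin_frame_full (FramePrivate *priv, GdkColorState *color_state)
    {
      /* no ref taken: the caller keeps the color state alive for the frame */
      priv->color_state = color_state;
    }

    static void
    end_frame (FramePrivate *priv)
    {
      /* nothing to unref, just drop the borrowed pointer */
      priv->color_state = NULL;
    }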
We need to round outwards, and a 1x1 rectangle with offset 0.5,0.5
should end up as a 3x3 rectangle with offset 0,0 when rounded, not as a
2x2 rectangle.
Unless there is a very good reason to use memcpy(), don't use it.
Not using it keeps the compiler from screwing up and wasting tons of
CPU that it didn't need to waste.
Gets my framerate back up from 1250 to 1750 and makes sysprof no longer
report ~40% of render time spent in gsk_gpu_colorize_op().
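The kind of change this is about, as a sketch (the Color struct and the
helpers are placeholders, not the actual gsk code):

    #include <string.h>

    typedef struct { float values[4]; } Color;

    /* Plain struct assignment: the compiler sees the type and the size
     * and can inline the copy. */
    static inline void
    color_copy (Color *dest, const Color *src)
    {
      *dest = *src;
    }

    /* The pattern being removed: a memcpy() that the optimizer may treat
     * as an opaque, variable-sized copy, especially when the size is
     * computed at runtime. */
    static inline void
    color_copy_memcpy (Color *dest, const Color *src, size_t n_components)
    {
      memcpy (dest->values, src->values, n_components * sizeof (float));
    }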
We know it at begin_frame() time, so if we pass it there instead of
end_frame(), we can use it then to make decisions about opacity.
For example, we could notice that the whole surface is opaque and choose
an RGBx format.
We don't do that yet, but now we could.
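For example, a check along these lines would become possible (sketch
with illustrative names):

    #include <cairo.h>
    #include <glib.h>

    /* If the opaque region known at begin_frame() covers the whole
     * surface, an alpha-less format such as RGBx would do. */
    static gboolean
    surface_is_fully_opaque (const cairo_region_t *opaque,
                             int                   width,
                             int                   height)
    {
      cairo_rectangle_int_t full = { 0, 0, width, height };

      return opaque != NULL &&
             cairo_region_contains_rectangle (opaque, &full) == CAIRO_REGION_OVERLAP_IN;
    }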
The opaque rects from the rendernodes are now used to set the opaque
region in the backend.
This means applications can now set a transparent window background and
make individual parts of their window opaque.
But because this is a best effort method, it is not guaranteed to
succeed in finding all opaque regions, in particular if the rendernodes
used to build it are not straightforward to analyze.
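Conceptually it boils down to something like this (a sketch; the real
code walks the rendernode tree and is more careful than that):

    #include <cairo.h>

    /* Union the opaque rects reported by the rendernodes into the region
     * that gets handed to the backend. A missed rect only makes the
     * region smaller, never wrong. */
    static cairo_region_t *
    build_opaque_region (const cairo_rectangle_int_t *rects,
                         unsigned int                 n_rects)
    {
      cairo_region_t *region = cairo_region_create ();

      for (unsigned int i = 0; i < n_rects; i++)
        cairo_region_union_rectangle (region, &rects[i]);

      return region;
    }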
We are using so many internal extra features that it is no longer a good
idea to use these functions.
And they aren't really used anyway.
These extra features are also constantly in flux and rely on internal
APIs, so exposing them would just cause extra pain.
By using the inlining macro trick, we can work around the deprecation
warnings caused by removing this function from the public API, which
will happen in the next commits.
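The trick is presumably something along these lines (a guessed sketch
with made-up names, not the actual gdk headers):

    #include <glib.h>

    /* public header: the function is on its way out */
    G_DEPRECATED
    void gdk_thing_do_something (void);

    /* private header: internal callers go through this inline wrapper,
     * so they keep working without tripping the deprecation warning */
    G_GNUC_BEGIN_IGNORE_DEPRECATIONS
    static inline void
    gdk_thing_do_something_internal (void)
    {
      gdk_thing_do_something ();
    }
    G_GNUC_END_IGNORE_DEPRECATIONS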
GLContexts marked as surface_attached are always attached to the surface
in make_current().
Other contexts continue to only get attached to their surface between
begin_frame() and end_frame().
All our renderers use surface-attached contexts now.
Public API only gives out non-surface-attached contexts.
The benefit here is that we can now choose freely when to call
make_current(), because it will not cause a re-make_current() whether
we call it outside or inside the begin/end_frame() region.
Or in other words: I want to call make_current() before begin_frame()
without a performance penalty, and now I can.
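In sketch form (error handling and the real renderer setup omitted;
this only applies to the internal, surface-attached contexts):

    #include <gdk/gdk.h>

    /* With a surface-attached context, calling make_current() early does
     * not cause another re-make_current() inside begin_frame(). */
    static void
    render_frame (GdkGLContext         *context,
                  const cairo_region_t *region)
    {
      gdk_gl_context_make_current (context);

      /* ... set up GL state, upload resources, etc. ... */

      gdk_draw_context_begin_frame (GDK_DRAW_CONTEXT (context), region);
      /* ... draw ... */
      gdk_draw_context_end_frame (GDK_DRAW_CONTEXT (context));
    }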
... and pass the opaque region of the node.
We don't do anything with it yet; this is just the plumbing.
The original function still exists; it passes NULL, which is the value
for no opaque region at all.
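I.e. something along these lines (hypothetical names):

    #include <cairo.h>

    typedef struct _Thing Thing;

    /* The new variant takes the opaque region. */
    void
    thing_attach_full (Thing                *thing,
                       const cairo_region_t *opaque_region)
    {
      (void) thing;
      /* opaque_region == NULL means "no opaque region at all" */
      (void) opaque_region;
    }

    /* The original entry point keeps its signature and passes NULL. */
    void
    thing_attach (Thing *thing)
    {
      thing_attach_full (thing, NULL);
    }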
If an image description query was still running while the surface got
destroyed, we were not properly cleaning up, which caused the callbacks
to be emitted on freed variables.
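The shape of the fix, roughly (a sketch; a GCancellable stands in here
for whatever handle the real query holds):

    #include <gio/gio.h>

    typedef struct {
      GCancellable *image_desc_cancellable;
    } SurfaceColorState;

    /* Cancel the in-flight query on destroy and clear the pointer, so
     * the completion callback never touches freed surface data. */
    static void
    surface_color_state_destroy (SurfaceColorState *state)
    {
      if (state->image_desc_cancellable != NULL)
        {
          g_cancellable_cancel (state->image_desc_cancellable);
          g_clear_object (&state->image_desc_cancellable);
        }
    }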
We were comparing with destination stride, not with source stride, and
in rare cases when those were different, this would trigger aborts in
the testsuite.
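The class of bug, in sketch form (a made-up copy helper, not the actual
code):

    #include <stdint.h>
    #include <string.h>

    /* The single-memcpy fast path is only valid when *both* strides
     * equal the packed row size; taking it based on the wrong stride
     * copies the wrong bytes whenever the two strides differ. */
    static void
    copy_rows (uint8_t       *dst, size_t dst_stride,
               const uint8_t *src, size_t src_stride,
               size_t         row_bytes, size_t height)
    {
      if (dst_stride == row_bytes && src_stride == row_bytes)
        {
          memcpy (dst, src, row_bytes * height);
          return;
        }

      for (size_t y = 0; y < height; y++)
        memcpy (dst + y * dst_stride, src + y * src_stride, row_bytes);
    }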
Each time we create a new window, we create a new EGLSurface. Each time
we destroyed a window, we failed to destroy the EGLSurface, due to
passing a GdkDisplay instead of an EGLDisplay to eglDestroySurface().
This effectively leaked not only the EGL surface metadata, but also the
associated DMA buffers. For applications that open and close many
windows over a long runtime, for example a terminal emulator server,
this causes a significant memory leak, as the memory will only ever be
freed once the application process itself exits, if ever.
Fix this by passing an actual EGLDisplay instead of a GdkDisplay to
eglDestroySurface().
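The fixed call is just the textbook one (a sketch; epoxy's EGL header
is assumed here, plain EGL/egl.h works as well):

    #include <epoxy/egl.h>

    /* Destroy the EGL surface with the EGLDisplay it was created on;
     * passing anything else makes eglDestroySurface() fail and the
     * surface (and its buffers) live on. */
    static void
    destroy_egl_surface (EGLDisplay egl_display,
                         EGLSurface egl_surface)
    {
      if (egl_surface != EGL_NO_SURFACE)
        eglDestroySurface (egl_display, egl_surface);
    }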
This matches what the gpu renderer does when printing
colorstates.
It also avoids printing "S*RGBA8" for the format; instead it prints
"SRGBA8(p)" now.