Set the alpha channel to "undefined" in this case.
Gimp doesn't seem to like this when opening the image and insists on
doing something with it, which is a bit of a bummer.
But it allows GTK to load RGBx textures.
It seems Mesa doesn't support that yet, but having it doesn't hurt.
And it allows drivers to allocate less memory for the swapchains,
because we don't need all 4 images we request in minImageCount.
But drivers tend to take that minimum image count as gospel, so we need
to use a higher number to not cause lag in corner cases.
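Roughly, the swapchain setup this affects looks like the following
sketch (illustrative, not GTK's actual code; the Vulkan structs and
calls are real API):

  VkSurfaceCapabilitiesKHR caps;
  vkGetPhysicalDeviceSurfaceCapabilitiesKHR (physical_device, surface, &caps);

  /* Drivers may allocate exactly minImageCount images, so request some
   * headroom to avoid stalls, clamped to what the surface allows. */
  uint32_t image_count = MAX (4, caps.minImageCount);
  if (caps.maxImageCount > 0)
    image_count = MIN (image_count, caps.maxImageCount);

  VkSwapchainCreateInfoKHR info = {
    .sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
    .surface = surface,
    .minImageCount = image_count,
    /* imageFormat, imageExtent, imageUsage etc. omitted */
  };
  vkCreateSwapchainKHR (device, &info, NULL, &swapchain);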
Removed via regex and grep.
The following were intentionally not removed:
- GtkImage:file: (attributes org.gtk.Property.set=gtk_image_set_from_file)
- GtkImage:resource: (attributes org.gtk.Property.set=gtk_image_set_from_resource)
As they have no getter, and a (setter PROP) without a (getter PROP)
crashes gobject-introspection. This is fixed by
ad3118eb51.
The annotations should only be set when the setter or getter for a
property "GtkClassName:prop-name" is not named
gtk_class_name_set_property_name / gtk_class_name_get_property_name.
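For illustration (the GtkImage functions named here exist; the rule is
the one stated above):

  /* Annotation needed: the setter is gtk_image_set_from_file (),
   * not the derivable name gtk_image_set_file ():
   *
   *   GtkImage:file: (attributes org.gtk.Property.set=gtk_image_set_from_file)
   *
   * Annotation not needed: GtkImage:icon-size already pairs with
   * gtk_image_get_icon_size () / gtk_image_set_icon_size (). */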
linear will average all the pixels for the lod, nearest will just pick
one (using the same method as OpenGL/Vulkan: picking the bottom-right
center pixel).
This doesn't really make linear/nearest filtering work as it should
(because it's still a form of mipmapping), but it has 2 advantages:
1. it gets closer to the desired effect
2. it is a lot faster
Because only 1 pixel is chosen from the original image, instead of
averaging all pixels, a lot less memory needs to be accessed, and
because memory access is the bottleneck for large images, the speedup is
almost linear with the number of pixels not accessed.
And that means that even for lod level 3, aka 1/8th scale, only 1/64 of
the pixels need to be accessed, and everything is 50x faster.
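A minimal sketch of the nearest path for tightly packed RGBA8 data
(illustrative only, assuming GLib types and <string.h>; not the actual
implementation):

  static void
  mipmap_nearest_rgba8 (guchar       *dst,
                        gsize         dst_stride,
                        const guchar *src,
                        gsize         src_stride,
                        gsize         width,
                        gsize         height,
                        guint         lod)
  {
    gsize n = (gsize) 1 << lod;

    for (gsize y = 0; y < height >> lod; y++)
      {
        /* one source row per output row: the bottom-right center of
         * each n x n block, like OpenGL/Vulkan */
        const guchar *row = src + (y * n + n / 2) * src_stride;
        guchar *out = dst + y * dst_stride;

        for (gsize x = 0; x < width >> lod; x++)
          memcpy (out + 4 * x, row + 4 * (x * n + n / 2), 4);
      }
  }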
Switching gtk4-demo --run=image_scaling to linear/nearest makes all the
lag go away for me, even with a 64k x 64k image.
We have fast conversion functions, so use those directly instead of
calling into gdk_memory_convert().
This is useful because, as mentioned before, the main optimization here
is RGB8 => RGBA8 and we have a fastpath for that.
Why do we need this? Because RGB images are provided in RGB format,
but GPUs can't handle RGB, only RGBA, so we need to convert.
And we need to do that without allocating too much memory, because
allocating memory is slow. Which means in particular we need to do the
conversion after mipmapping, not before (like we were doing).
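Conceptually, that fastpath amounts to this sketch (not the actual GTK
code):

  static void
  rgb8_to_rgba8 (guchar *dst, const guchar *src, gsize n_pixels)
  {
    for (gsize i = 0; i < n_pixels; i++)
      {
        dst[4 * i + 0] = src[3 * i + 0];
        dst[4 * i + 1] = src[3 * i + 1];
        dst[4 * i + 2] = src[3 * i + 2];
        dst[4 * i + 3] = 0xFF; /* x/alpha channel, fully opaque */
      }
  }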
This allows uploading less memory but requires computing lod levels on
the CPU, which is slow because it reads through all of the memory and is
so far entirely unoptimized.
However, it uses significantly less VRAM.
This is done by adding a gdk_memory_mipmap() function that does this
task.
The texture upload op now accepts a lod level and if that is >0 it uses
gdk_memory_mipmap() on the source texture.
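To show why the CPU pass is memory-bound, here is the naive shape of
the task for linear filtering of packed RGB8 data (a sketch; the real
gdk_memory_mipmap() handles many formats and need not look like this):

  static void
  mipmap_linear_rgb8 (guchar *dst, const guchar *src,
                      gsize width, gsize height, guint lod)
  {
    gsize n = (gsize) 1 << lod;

    for (gsize y = 0; y < height >> lod; y++)
      for (gsize x = 0; x < width >> lod; x++)
        for (guint c = 0; c < 3; c++)
          {
            guint sum = 0;

            /* every source pixel gets read once; this is what makes
             * the CPU pass slow for large images */
            for (gsize j = 0; j < n; j++)
              for (gsize i = 0; i < n; i++)
                sum += src[3 * ((y * n + j) * width + (x * n + i)) + c];

            dst[3 * (y * (width >> lod) + x) + c] = sum / (n * n);
          }
  }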
This is just the API. Users will come later.
I considered putting it into gdkmemoryformat.c because it's likely gonna
be the only user and this one function is so little code, but it didn't
fit at all.
So now it's a new file.
Depth of a rendernode should be determined by the textures used and the
compositing colorstate requirements.
Colors influence the colorstate choice, so they indirectly influence the
depth, but they should not influence the depth directly.
Otherwise a single color in a border being rec2100-pq would make us
switch to 16bit float.
Also remove gdk_color_get_depth(), because it was only used here and
because again: Colors should not influence depth decisions.
When prepare_gl fails in the right way (or the wrong way?), we
end up creating the leader window twice, and as a side effect,
creating two instances of the "Virtual core pointer" device, which
is bad news for grabs.
Fixes: #6840
Call wl_surface_offset in end_frame to apply the offset for drag
surfaces. This matches what the GL draw context already does, and
it fixes drag surfaces jumping at the beginning of the drag.
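Sketched, with invented names around the one real request
(wl_surface_offset() requires wl_surface version 5 or newer):

  #include <wayland-client-protocol.h>

  /* called from end_frame before committing the surface */
  static void
  apply_pending_offset (struct wl_surface *wl_surface,
                        int32_t           *pending_x,
                        int32_t           *pending_y)
  {
    if (*pending_x != 0 || *pending_y != 0)
      {
        wl_surface_offset (wl_surface, *pending_x, *pending_y);
        *pending_x = 0;
        *pending_y = 0;
      }
  }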
Fixes: #6972
We switched to using the Unicode (UTF-16) versions of the Windows API
by default, so we now obtain the display name in UTF-16 form as well.
This updates the implementation in the Windows backend so that we
properly acquire the names that we need in UTF-16, and then convert the
results to UTF-8, which is what we use in GTK/GLib.
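The conversion step, sketched (EnumDisplayDevicesW() and
g_utf16_to_utf8() are real API; the rest is illustrative, not the
actual backend code):

  #include <windows.h>
  #include <glib.h>

  static char *
  display_device_string_utf8 (void)
  {
    DISPLAY_DEVICEW dd = { .cb = sizeof (dd) };

    if (!EnumDisplayDevicesW (NULL, 0, &dd, 0))
      return NULL;

    /* dd.DeviceString is UTF-16; GTK/GLib work in UTF-8 */
    return g_utf16_to_utf8 ((const gunichar2 *) dd.DeviceString, -1,
                            NULL, NULL, NULL);
  }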
After commit 447bc18c48 EGL on X11 broke.
But the handling of the GL context also was quite awkward because it was
unclear who was responsible for ensuring it got reset.
Change that by making gdk_gl_context_clear_current_if_surface() return
the context it had unset (with a reference, since it might be the last
one), so that after changing EGL properties the code that caused the
clearing can re-make it current.
This moves the responsibility to the actual code that is dealing with
updating properties and frees the outer layers of code from that task.
And that means the X11 EGL code doesn't need to care and the code in the
Wayland backend that did care can be removed.
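The resulting pattern at the call sites looks roughly like this (a
sketch; gdk_gl_context_clear_current_if_surface() is private API and
the surrounding code is invented):

  GdkGLContext *context;

  context = gdk_gl_context_clear_current_if_surface (surface);

  /* ... change the EGL properties that required unsetting ... */

  if (context)
    {
      gdk_gl_context_make_current (context);
      g_object_unref (context); /* we were handed a (possibly last) ref */
    }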
Related: !7662
Fixes: #6964 on X11
This essentially reverts the changes from
c230546a2c but implies new semantics.
Namely, surface-attached contexts can now be bound to EGL_NO_SURFACE if
the windowing system isn't ready yet.
It is the task of the windowing system to make sure the context is
properly rebound when the contents become available.
We ensure this by checking in begin_frame() if we created the EGL window
and if we did, we make_current(). This works because creating the EGL
window creates the EGL surface and that does a clear_current(), so this
is always going to have the desired effect of re-making the context
current.
It is very convoluted though.
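In code, the begin_frame() check amounts to something like this sketch
(field names invented):

  /* creating the EGL window also created the EGL surface, which did a
   * clear_current(), so rebind the context now that contents exist */
  if (self->egl_window_was_created)
    {
      gdk_gl_context_make_current (context);
      self->egl_window_was_created = FALSE;
    }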
Fixes: #6964
Related: https://gitlab.freedesktop.org/mesa/mesa/-/issues/11784
We do this because:
a) The parent class (GdkGLContext) already stores the paint regions of
previous frames, no need to do the same.
b) The painted region passed to end_frame () includes the backbuffer's
damage region, so it's not really what we want.
This also fixes a leak of cairo_region_t that I introduced by mistake
in !7418