Removed via regex and grep.
The following were intentionally not removed:
- GtkImage:file: (attributes org.gtk.Property.set=gtk_image_set_from_file)
- GtkImage:resource: (attributes org.gtk.Property.set=gtk_image_set_from_resource)
As they have no getter, and a (setter PROP) annotation without a matching
(getter PROP) crashes gobject-introspection. This is fixed by
ad3118eb51.
The annotations should only be set when the name of the setter or getter
for a property "GtkClassName:prop-name" is not gtk_class_name_g(s)et_property_name.
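To illustrate the rule (abbreviated doc comments for illustration, not
GTK's literal source):

    /* Needs no annotation: the setter is gtk_widget_set_margin_start,
     * which matches the property name. */
    /**
     * GtkWidget:margin-start:
     */

    /* Needs the annotation: the setter is gtk_image_set_from_file,
     * not the default-derived gtk_image_set_file. */
    /**
     * GtkImage:file: (attributes org.gtk.Property.set=gtk_image_set_from_file)
     */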
Add buttons for loading the Portland Rose and a nameless large
PNG. Make them load the texture in a thread, to demonstrate better
handling of large images.
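A minimal sketch of such a threaded load using GTask (the demo's actual
code may differ; texture_loaded_cb, picture and path are illustrative
names):

    static void
    load_texture_thread (GTask        *task,
                         gpointer      source_object,
                         gpointer      task_data,
                         GCancellable *cancellable)
    {
      GError *error = NULL;
      GdkTexture *texture = gdk_texture_new_from_file (task_data, &error);

      if (texture)
        g_task_return_pointer (task, texture, g_object_unref);
      else
        g_task_return_error (task, error);
    }

    static void
    texture_loaded_cb (GObject *source, GAsyncResult *result, gpointer data)
    {
      GdkTexture *texture = g_task_propagate_pointer (G_TASK (result), NULL);

      if (texture)
        {
          gtk_picture_set_paintable (GTK_PICTURE (data), GDK_PAINTABLE (texture));
          g_object_unref (texture);
        }
    }

    /* In the button's click handler: decode off the main thread. */
    GTask *task = g_task_new (NULL, NULL, texture_loaded_cb, picture);
    g_task_set_task_data (task, g_file_new_for_path (path), g_object_unref);
    g_task_run_in_thread (task, load_texture_thread);
    g_object_unref (task);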
Linear will average all the pixels for the lod; nearest will just pick
one (using the same method as OpenGL/Vulkan: picking the bottom-right
center pixel).
This doesn't really make linear/nearest filtering work as it should
(because it's still a form of mipmapping), but it has 2 advantages:
1. it gets closer to the desired effect
2. it is a lot faster
Because only 1 pixel is chosen from the original image, instead of
averaging all pixels, a lot less memory needs to be accessed, and
because memory access is the bottleneck for large images, the speedup is
almost linear with the number of pixels not accessed.
And that means that even for lod level 3, aka 1/8th scale, only 1/64 of
the pixels need to be accessed, and everything is 50x faster.
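A rough sketch of the nearest variant for one downsampling pass,
assuming packed 8-bit RGBA and dimensions that are multiples of 2^lod
(real code also has to handle edge blocks; lod must be >= 1):

    #include <stddef.h>
    #include <string.h>

    /* Downsample by 2^lod, copying one source pixel per block:
     * the pixel just below and right of the block center. */
    static void
    mipmap_nearest (unsigned char *dest, size_t dest_stride,
                    const unsigned char *src, size_t src_stride,
                    size_t dest_width, size_t dest_height,
                    unsigned int lod)
    {
      size_t off = (size_t) 1 << (lod - 1);

      for (size_t y = 0; y < dest_height; y++)
        for (size_t x = 0; x < dest_width; x++)
          memcpy (dest + y * dest_stride + 4 * x,
                  src + ((y << lod) + off) * src_stride
                      + 4 * ((x << lod) + off),
                  4);
    }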
Switching gtk4-demo --run=image_scaling to linear/nearest makes all the
lag go away for me, even with a 64k x 64k image.
We have fast conversion functions; use those directly instead of calling
into gdk_memory_convert().
This is useful because, as mentioned before, the main optimization here
is RGB8 => RGBA8, and we have a fastpath for that.
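For reference, this direction of the conversion just appends an opaque
alpha byte; a plain scalar version looks like this (the real fastpaths
may be vectorized):

    #include <stddef.h>

    /* Expand packed RGB8 to RGBA8 with opaque alpha. */
    static void
    rgb8_to_rgba8 (unsigned char *dest, const unsigned char *src,
                   size_t n_pixels)
    {
      for (size_t i = 0; i < n_pixels; i++)
        {
          dest[4 * i + 0] = src[3 * i + 0];
          dest[4 * i + 1] = src[3 * i + 1];
          dest[4 * i + 2] = src[3 * i + 2];
          dest[4 * i + 3] = 0xFF;
        }
    }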
Why do we need this? Because RGB images are provided in RGB format, but
GPUs can't handle RGB, only RGBA, so we need to convert.
And we need to do that without allocating too much memory, because
allocating memory is slow. In particular, that means we need to do the
conversion after mipmapping, not before (like we were doing), since the
mipmapped image is much smaller.
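To make the allocation argument concrete (the numbers are an
illustration, not from this commit):

    /* Convert-then-mipmap vs mipmap-then-convert for a
     * 16384 x 16384 RGB8 image at lod 2 (1/4 scale): */
    size_t before = (size_t) 16384 * 16384 * 4;               /* 1 GiB RGBA copy */
    size_t after  = (size_t) (16384 >> 2) * (16384 >> 2) * 4; /* 64 MiB, 1/16th */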
This allows uploading less memory, but requires computing lod levels on
the CPU, which is slow because it reads through all of the memory and is
so far entirely unoptimized.
However, it uses significantly less VRAM.
This is done by adding a gdk_memory_mipmap() function that does this
task.
The texture upload op now accepts a lod level and if that is >0 it uses
gdk_memory_mipmap() on the source texture.
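As a sketch of that upload path (gdk_memory_mipmap()'s real signature is
internal; the call and the upload helpers shown here are assumptions for
illustration):

    if (lod > 0)
      {
        /* Downsample on the CPU first, then upload the smaller
         * (width >> lod) x (height >> lod) image. */
        data = gdk_memory_mipmap (source_texture, lod); /* hypothetical signature */
        upload (data, width >> lod, height >> lod);     /* hypothetical helper */
      }
    else
      upload_full (source_texture);                     /* hypothetical helper */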
This is just the API. Users will come later.
I considered putting it into gdkmemoryformat.c, because that file is
likely going to be the only user and this one function is so little
code, but it didn't fit at all.
So now it's a new file.
rgba(from @foo ...) would crash if @foo was not a named color.
Handle it as we do elsewhere, by returning NULL from resolve().
Test included.
Fixes: #6985
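The shape of the fix, as a sketch (types and helper names are
hypothetical, not GTK's internal CSS API):

    typedef struct _Color Color;

    /* Hypothetical stand-in for GTK's named-color lookup. */
    extern const Color *lookup_named_color (const char *name);

    static const Color *
    resolve_color_reference (const char *name)
    {
      const Color *named = lookup_named_color (name);

      if (named == NULL)
        return NULL; /* @name is not a named color: fail, don't crash */

      return named;
    }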
If the popover isn't visible, no need to do any extra
'cascade' work. This also helps to avoid running into
trouble during finalization when the parents are already
gone.
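The early-return shape (illustrative; the function name is hypothetical,
gtk_widget_get_visible() is the real check):

    static void
    cascade_popdown (GtkWidget *popover)
    {
      /* Skip the cascade entirely for invisible popovers; this also
       * avoids touching parents that may already be gone at finalize. */
      if (!gtk_widget_get_visible (popover))
        return;

      /* ... walk up and pop down the parent menus ... */
    }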
wglGetExtensionsStringARB takes an HDC argument even though it
checks extensions for the current context. This was done for future
extensibility. From [1]:
> Should this function take an hdc? It seems like a good idea. At
> some point MS may want to incorporate this into OpenGL32. If they
> do this and they want to support more than one ICD, then an HDC
> would be needed.
Currently the HDC argument is unused, but wglGetExtensionsStringARB()
is still required to check that the HDC is valid:
> If <hdc> does not indicate a valid device context then the function
> fails and the error ERROR_DC_NOT_FOUND is generated. If the function
> fails, the return value is NULL. To get extended error information,
> call GetLastError.
So wglGetExtensionsStringARB fails if we pass NULL. We can pass any
valid HDC, for example the screen DC returned by GetDC(NULL), but using
wglGetCurrentDC() probably makes the most sense.
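Concretely, the lookup and query then look like this (sketch; assumes a
GL context is already current, error handling elided):

    #include <windows.h>

    typedef const char * (WINAPI *PFNWGLGETEXTENSIONSSTRINGARBPROC) (HDC hdc);

    PFNWGLGETEXTENSIONSSTRINGARBPROC wglGetExtensionsStringARB =
      (PFNWGLGETEXTENSIONSSTRINGARBPROC)
        wglGetProcAddress ("wglGetExtensionsStringARB");

    /* Any valid HDC works; the current DC is the natural choice since
     * the extensions reported are those of the current context. */
    const char *extensions = wglGetExtensionsStringARB (wglGetCurrentDC ());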
Reference:
[1] - https://registry.khronos.org/OpenGL/extensions/ARB/WGL_ARB_extensions_string.txt