This allows having different layouts so that we can support immutable
samplers, which are required for multiplane and YUV formats.
We don't use them yet.
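For context, here is a minimal sketch of what such an immutable-sampler
binding looks like in a Vulkan descriptor set layout. The sampler would
come from a VkSamplerYcbcrConversion created elsewhere; the function and
variable names are just for illustration:

    #include <vulkan/vulkan.h>

    /* Sketch only: binding 0 is a combined image sampler whose sampler is
     * baked into the layout. Sampling multiplane/YUV images requires
     * exactly this kind of immutable sampler. */
    static VkDescriptorSetLayout
    create_immutable_sampler_layout (VkDevice  device,
                                     VkSampler ycbcr_sampler)
    {
      VkDescriptorSetLayoutBinding binding = {
        .binding = 0,
        .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
        .descriptorCount = 1,
        .stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT,
        .pImmutableSamplers = &ycbcr_sampler,
      };
      VkDescriptorSetLayoutCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO,
        .bindingCount = 1,
        .pBindings = &binding,
      };
      VkDescriptorSetLayout layout;

      vkCreateDescriptorSetLayout (device, &info, NULL, &layout);
      return layout;
    }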
Introduce a new GskGpuImageDescriptors object that tracks descriptors
for a set of images that can be managed by the GPU.
Then have each GskGpuShaderOp just reference the descriptors object it is
using, so that the code can set things up properly.
To reference an image, the ops now just reference their descriptor -
which is the uint32 we've been sending to the shaders since forever.
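As a rough sketch of the idea (the type and function names below are
illustrative, not the actual GSK API): the descriptors object keeps an
array of images, and adding an image returns the uint32 that the op stores
and later hands to the shader.

    #include <glib.h>

    /* Illustrative only, not the real GskGpuImageDescriptors. */
    typedef struct {
      GPtrArray *images;   /* the images referenced by a batch of ops */
    } ImageDescriptors;

    /* Returns the uint32 descriptor the shader will use to look up the
     * image, reusing an existing slot if the image is already tracked. */
    static guint32
    image_descriptors_use_image (ImageDescriptors *self,
                                 gpointer          image)
    {
      guint i;

      for (i = 0; i < self->images->len; i++)
        {
          if (g_ptr_array_index (self->images, i) == image)
            return i;
        }

      g_ptr_array_add (self->images, image);
      return self->images->len - 1;
    }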
Use glDrawArraysInstancedBaseInstance() to draw. (Yay for GL naming.)
That allows setting up the offset in the vertex array without having to
glVertexAttribPointer() everything again.
However, this is only supported since GL 4.2 and not at all in stock GLES,
so we need to have code that can work without it.
Fortunately, it is mandatory in Vulkan, so every recent GPU supports it.
And if that GPU has a proper driver, it will also expose the GL extension
for it.
(Hint: You can check https://opengles.gpuinfo.org/listextensions.php for
how many proper drivers exist outside of Mesa.)
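A sketch of how the two code paths could look; the have_base_instance
flag stands in for a check of the GL version or the base-instance
extension, and attribute 0 stands in for whatever per-instance data the
shaders actually consume:

    #include <epoxy/gl.h>
    #include <glib.h>

    /* Sketch only: draw n_instances quads, starting at instance "offset"
     * in the per-instance vertex buffer that is currently bound. */
    static void
    draw_instances (gboolean have_base_instance,
                    guint    instance_size,
                    guint    offset,
                    guint    n_instances)
    {
      if (have_base_instance)
        {
          /* One call, no need to re-point the vertex attributes. */
          glDrawArraysInstancedBaseInstance (GL_TRIANGLES, 0, 6,
                                             n_instances, offset);
        }
      else
        {
          /* Fallback: re-point the per-instance attribute(s) at the right
           * offset, then draw without a base instance. */
          glVertexAttribPointer (0, 4, GL_FLOAT, GL_FALSE, instance_size,
                                 GSIZE_TO_POINTER ((gsize) offset * instance_size));
          glDrawArraysInstanced (GL_TRIANGLES, 0, 6, n_instances);
        }
    }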
Because GL flips its y axis sometimes (i.e. when the render target is the
framebuffer), pass the height of the target as the flip variable, so commands
that need to operate on the pixels can flip the y axis around this value.
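As an illustration of that convention (the helper and the exact encoding
of "no flipping" are assumptions, not the actual renderer code):

    #include <glib.h>

    /* Sketch: GL's default framebuffer has y pointing up, while offscreen
     * textures (and Vulkan targets) have y pointing down, so only the
     * former needs flipping. */
    static float
    get_flip_value (gboolean target_is_gl_framebuffer,
                    float    target_height)
    {
      return target_is_gl_framebuffer ? target_height : 0.0f;
    }

    /* A command that operates on raw pixels would then mirror its y
     * coordinates as:  y = flip > 0 ? flip - y : y;  */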
This brings over an initial chunk of code from the Vulkan renderer to
execute shaders.
The only shader that exists for now is a shader that draws a single
texture.
We use that to replace the blit op we were doing before.
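To make that concrete, here is a hypothetical sketch of queuing such a
texture op instead of a blit; the struct layout and helper name are
invented for illustration, and the descriptor is the uint32 discussed
above:

    #include <glib.h>
    #include <graphene.h>

    /* Sketch only: per-instance data for the "draw one texture" shader. */
    typedef struct {
      float   rect[4];       /* destination rectangle in the target */
      float   tex_rect[4];   /* source rectangle in texture coordinates */
      guint32 tex_id;        /* descriptor of the texture to sample */
    } TextureInstance;

    /* Instead of blitting the image into the target, queue one instance
     * that the texture shader will rasterize as a textured quad. */
    static void
    queue_texture_op (GArray                *instances,
                      guint32                texture_descriptor,
                      const graphene_rect_t *bounds)
    {
      TextureInstance instance = {
        .rect = { bounds->origin.x, bounds->origin.y,
                  bounds->size.width, bounds->size.height },
        .tex_rect = { 0, 0, 1, 1 },
        .tex_id = texture_descriptor,
      };

      g_array_append_val (instances, instance);
    }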