We now store all the relevant state of the image inside the VulkanImage
struct, so we can delay barriers for as long as possible.
Whenever we want to use an image, we call the new
gsk_vulkan_image_transition() and it adds a barrier transitioning the
image to the desired state if one is necessary.
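A minimal sketch of that pattern, assuming the struct tracks its
last-known layout, access mask and pipeline stage (field and parameter
names here are illustrative, not the exact GSK ones):

    #include <vulkan/vulkan.h>

    typedef struct
    {
      VkImage              vk_image;
      VkImageLayout        vk_image_layout;
      VkAccessFlags        vk_access;
      VkPipelineStageFlags vk_pipeline_stage;
    } GskVulkanImage;

    static void
    gsk_vulkan_image_transition (GskVulkanImage       *self,
                                 VkCommandBuffer       command_buffer,
                                 VkPipelineStageFlags  stage,
                                 VkImageLayout         layout,
                                 VkAccessFlags         access)
    {
      /* already in the desired state => no barrier needed */
      if (self->vk_image_layout == layout && self->vk_access == access)
        return;

      vkCmdPipelineBarrier (command_buffer,
                            self->vk_pipeline_stage, stage,
                            0,
                            0, NULL,
                            0, NULL,
                            1, &(VkImageMemoryBarrier) {
                                .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
                                .srcAccessMask = self->vk_access,
                                .dstAccessMask = access,
                                .oldLayout = self->vk_image_layout,
                                .newLayout = layout,
                                .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
                                .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
                                .image = self->vk_image,
                                .subresourceRange = {
                                    .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
                                    .levelCount = 1,
                                    .layerCount = 1,
                                },
                            });

      /* remember the new state so later callers can skip the barrier */
      self->vk_image_layout = layout;
      self->vk_access = access;
      self->vk_pipeline_stage = stage;
    }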
This is a massive refactoring because it collects all the renderops
of all renderpasses into one long array in the Render object.
Lots of code in there is still flaky and needs cleanup; that will
follow in further commits.
Other than that, it works fine.
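A rough sketch of the new layout, with illustrative names (not the
real GSK types): the Render object owns one growable array of ops and
every renderpass appends to it instead of keeping its own list.

    #include <glib.h>

    typedef struct
    {
      guint op_type;   /* which shader / command this op stands for */
      /* per-op data follows */
    } GskVulkanOp;

    typedef struct
    {
      GArray *ops;     /* GskVulkanOp, shared by all renderpasses */
    } GskVulkanRender;

    static void
    gsk_vulkan_render_init_ops (GskVulkanRender *render)
    {
      render->ops = g_array_new (FALSE, FALSE, sizeof (GskVulkanOp));
    }

    static GskVulkanOp *
    gsk_vulkan_render_add_op (GskVulkanRender *render,
                              guint            op_type)
    {
      GskVulkanOp op = { .op_type = op_type };

      g_array_append_val (render->ops, op);

      /* note: the pointer is only valid until the next append grows
       * the array */
      return &g_array_index (render->ops, GskVulkanOp, render->ops->len - 1);
    }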
All the ops that just execute a shader do pretty much the same work,
so put it all in a single function that they all call.
It's basically faking a base class for them.
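A sketch of the "fake base class", with illustrative names: each
shader op keeps only its own data and delegates the shared
command-recording work to one common function.

    #include <vulkan/vulkan.h>
    #include <glib.h>

    typedef struct
    {
      guint32 vertex_offset;   /* offset into the shared vertex buffer */
      guint32 vertex_count;
    } GskVulkanShaderOp;

    /* the shared "base class" body: every op that just runs a shader
     * records its draw through here */
    static void
    gsk_vulkan_shader_op_command (GskVulkanShaderOp *op,
                                  VkCommandBuffer    command_buffer)
    {
      vkCmdDraw (command_buffer,
                 op->vertex_count, 1,
                 op->vertex_offset, 0);
    }

    /* a concrete op only adds what is unique to it */
    static void
    gsk_vulkan_color_op_command (GskVulkanShaderOp *op,
                                 VkCommandBuffer    command_buffer)
    {
      /* anything color-specific would go here */
      gsk_vulkan_shader_op_command (op, command_buffer);
    }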
Instead of creating a pipeline GObject, just ask for the VkPipeline.
And instead of having the Op handle it, just let the renderpass look
up/create the relevant pipeline while creating commands, so that it
can insert vkCmdBindPipeline calls as needed.
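A sketch of that flow, assuming a hypothetical lookup helper: while
recording commands, the renderpass asks for the bare VkPipeline and
only emits vkCmdBindPipeline when it differs from the one currently
bound.

    #include <vulkan/vulkan.h>
    #include <glib.h>

    /* assumed helper that looks up (or lazily creates and caches) the
     * pipeline for a given shader/blend/format combination; not the
     * real GSK signature */
    VkPipeline gsk_vulkan_render_get_pipeline (gpointer render,
                                               guint    pipeline_id);

    static void
    record_op (gpointer        render,
               guint           pipeline_id,
               VkCommandBuffer command_buffer,
               VkPipeline     *current_pipeline)
    {
      VkPipeline pipeline;

      pipeline = gsk_vulkan_render_get_pipeline (render, pipeline_id);

      /* bind only when the pipeline actually changes between ops */
      if (pipeline != *current_pipeline)
        {
          vkCmdBindPipeline (command_buffer,
                             VK_PIPELINE_BIND_POINT_GRAPHICS,
                             pipeline);
          *current_pipeline = pipeline;
        }

      /* ...then record the op's own draw commands... */
    }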
This reverts most of commit f420c143e0
again because it turns out GPUs like combined images and samplers.
But: The one thing we don't revert is allowing the C code to select any
combination of sampler and image:
gsk_vulkan_render_get_image_descriptor() now takes a 2nd argument
specifying the sampler.
This allows the same flexibility as before; we just combine things
earlier.
This change was inspired by
https://developer.nvidia.com/blog/vulkan-dos-donts/
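A sketch of the "combine early" idea (names are illustrative): the
caller still picks image and sampler independently, but they end up in
a single combined image/sampler descriptor right away.

    #include <vulkan/vulkan.h>

    static void
    write_combined_image_sampler (VkDevice        device,
                                  VkDescriptorSet descriptor_set,
                                  uint32_t        dst_array_element,
                                  VkImageView     image_view,
                                  VkSampler       sampler)
    {
      vkUpdateDescriptorSets (device,
                              1, &(VkWriteDescriptorSet) {
                                  .sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET,
                                  .dstSet = descriptor_set,
                                  .dstBinding = 0,
                                  .dstArrayElement = dst_array_element,
                                  .descriptorCount = 1,
                                  .descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
                                  .pImageInfo = &(VkDescriptorImageInfo) {
                                      .sampler = sampler,
                                      .imageView = image_view,
                                      .imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
                                  },
                              },
                              0, NULL);
    }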
Instead of creating the op manually, just pass in the renderpass and
have the op created from there.
This way ops aren't really initialized anymore; they are appended to
the queue. So instead of foo_op_init() we can just call the function
foo_op().
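A sketch of the new calling convention, with illustrative names and an
assumed allocator helper: foo_op() reserves space for the op at the
tail of the renderpass's queue and fills it in there, instead of
initializing a struct the caller owns.

    #include <glib.h>
    #include <string.h>

    typedef struct
    {
      float rect[4];    /* x, y, width, height */
      float color[4];   /* RGBA */
    } GskVulkanColorOp;

    /* assumed helper: reserves size bytes at the end of the render
     * pass's op queue and returns a pointer to them */
    gpointer gsk_vulkan_render_pass_alloc_op (gpointer render_pass,
                                              gsize    size);

    static void
    gsk_vulkan_color_op (gpointer    render_pass,
                         const float rect[4],
                         const float color[4])
    {
      GskVulkanColorOp *op;

      /* the op is created inside the queue, not handed in by the caller */
      op = gsk_vulkan_render_pass_alloc_op (render_pass,
                                            sizeof (GskVulkanColorOp));
      memcpy (op->rect, rect, sizeof (op->rect));
      memcpy (op->color, color, sizeof (op->color));
    }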