Always make sure we emit user outputs in vertex shaders if they are
active in the entry point. Otherwise, subsequent stages can legally
attempt to read from these variables, which causes compilation failure.
Previously, we only considered invalid names and overwrote the alias for
the function. The correct fix is to replace illegal names early, perform
the reserved-keyword fixup, then copy the alias back to the entry-point
name.
Skip emitting fixup hooks for builtins that are not active in the entry
point. This is necessary to avoid invalid output because of how implicit
dependencies on builtins work.
For example, the fixup for `BuiltInSubgroupEqMask` initializes the
variable based on `builtin_subgroup_invocation_id_id`, a field storing
the ID for a variable with decoration `BuiltInSubgroupLocalInvocationId`.
This could be either a variable that already exists in the input
(spirv_msl.cpp:300) or, if necessary, a newly created one
(spirv_msl.cpp:621). In both cases, though,
`builtin_subgroup_invocation_id_id` is only set under the condition
`need_subgroup_mask || needs_subgroup_invocation_id`.
`need_subgroup_mask` is true if any of the `BuiltInSubgroupXXMask`
builtins are set in `active_input_builtins`.
Normally, if the program contains `BuiltInSubgroupEqMask`,
`Compiler::ActiveBuiltinHandler` will set it in `active_input_builtins`.
But this only happens if the variable is actually used, whereas
`fix_up_shader_inputs_outputs` loops over all variables in the program
regardless of whether they're used.
If `BuiltInSubgroupEqMask` is not used,
`builtin_subgroup_invocation_id_id` is never set, but before this patch
the fixup hook would try to use it anyway, producing MSL that references
a nonexistent variable named `_0`.
Avoid this by changing `fix_up_shader_inputs_outputs` to skip builtins
that are not set in `active_input_builtins` or
`active_output_builtins`, and add a test case.
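As a rough sketch of the check (a simplified, self-contained model, not
the actual SPIRV-Cross code; the real `Bitset` handles bits beyond 64):

```cpp
#include <cstdint>

// Simplified model: a builtin only gets its fixup hook if its bit is
// set in the active-builtin set for the variable's storage class.
struct Bitset
{
    uint64_t bits = 0;
    bool get(uint32_t bit) const { return bit < 64 && ((bits >> bit) & 1u); }
};

enum StorageClass { StorageClassInput, StorageClassOutput };

static bool should_emit_fixup(StorageClass storage, uint32_t builtin,
                              const Bitset &active_input_builtins,
                              const Bitset &active_output_builtins)
{
    const Bitset &active = storage == StorageClassInput ?
                               active_input_builtins : active_output_builtins;
    // Skipping inactive builtins means no hook can ever reference an
    // uninitialized ID such as `builtin_subgroup_invocation_id_id`.
    return active.get(builtin);
}
```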
Add support for declaring a fixed subgroup size. Metal, like Vulkan with
`VK_EXT_subgroup_size_control`, allows the thread execution width to
vary depending on factors such as register usage. Unfortunately, this
breaks several tests that depend on the subgroup size being what the
device says it is. So we'll fix the subgroup size at the size the device
declares. The extra invocations in the subgroup will appear to be
inactive. Because of this, the ballot mask builtins are now ANDed with
the active subgroup mask.
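Roughly, the masking works like this (a host-side C++ model with
illustrative names, showing only the low 32 bits of the mask and
assuming a subgroup size of at most 32):

```cpp
#include <cstdint>

// With the subgroup size fixed, lanes at or beyond the real execution
// width are inactive, so ballot-style masks are ANDed with the mask of
// active lanes.
static uint32_t active_mask(uint32_t simd_size)
{
    return simd_size >= 32 ? 0xFFFFFFFFu : ((1u << simd_size) - 1u);
}

static uint32_t subgroup_eq_mask(uint32_t lane_id, uint32_t simd_size)
{
    return (1u << lane_id) & active_mask(simd_size);
}
```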
Add support for emulating a subgroup of size 1. This is intended to be
used by Vulkan Portability implementations (e.g. MoltenVK) when the
hardware/software combo provides insufficient support for subgroups.
Luckily for us, Vulkan 1.1 only requires that the subgroup size be at
least 1.
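A sketch of what the emulation boils down to (helper names are modeled
on the `spv`-prefixed helpers SPIRV-Cross emits, but the bodies here are
only meant to show the degenerate size-1 behavior):

```metal
#include <metal_stdlib>
using namespace metal;

// With a subgroup of size 1, the builtins become constants and every
// group operation collapses to the single lane.
constant uint spvSubgroupSize = 1u;          // gl_SubgroupSize
constant uint spvSubgroupInvocationID = 0u;  // gl_SubgroupInvocationID

bool spvSubgroupAll(bool v) { return v; }
bool spvSubgroupAny(bool v) { return v; }
uint4 spvSubgroupBallot(bool v) { return uint4(v ? 1u : 0u, 0u, 0u, 0u); }
```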
Add support for quadgroup and SIMD-group functions which were added to
iOS in Metal 2.2 and 2.3. This will allow clients to take advantage of
expanded quadgroup and SIMD-group support in recent Metal versions and
on recent Apple GPUs (families 6 and 7).
Gut emulation of subgroup builtins in fragment shaders. It turns out
codegen for the SIMD-group functions in fragment shaders wasn't
implemented for AMD on Mojave; it's a safe bet that it wasn't
implemented for the other
drivers either. Subgroup support in fragment shaders now requires Metal
2.2.
Metal doesn't support broadcasting or shuffling boolean values, but we
can work around that by casting them to `ushort`, then casting back to
`bool`. I used `ushort` instead of `uint` because 16-bit values give
better throughput on Apple GPUs.
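In MSL, the workaround looks roughly like this (illustrative helper
names, not the exact emitted code):

```metal
#include <metal_stdlib>
using namespace metal;

// Widen the bool to ushort, do the shuffle or broadcast, then narrow
// back to bool.
bool spvShuffleBool(bool value, ushort lane)
{
    return bool(simd_shuffle(ushort(value), lane));
}

bool spvBroadcastBool(bool value, ushort lane)
{
    return bool(simd_broadcast(ushort(value), lane));
}
```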
Process multiple patches per workgroup in tessellation control shaders.
This should hopefully reduce underutilization of the GPU, especially on
GPUs where the thread execution width is greater than the number of
control points.
This also simplifies initialization by reading the buffer directly
instead of using Metal's vertex-attribute-in-compute support. It turns
out the only way in which shader stages are allowed to differ in their
interfaces is in the number of components per vector; the base type must
be the same. Since we are using the raw buffer instead of attributes, we
can now also emit arrays and matrices directly into the buffer, instead
of flattening them and then unpacking them. Structs are still flattened,
however; this is due to the need to handle vectors with fewer components
than were output, and I think handling this while also directly emitting
structs could get ugly.
Another advantage of this scheme is that the extra invocations
previously needed to read the attributes when there were more input
points than output points are no longer required. The number of threads
per workgroup is now lcm(SIMD-size, output control points). This should
ensure we always process a whole number of patches per workgroup.
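For concreteness (a worked example, not code from the patch):

```cpp
#include <cstdint>
#include <numeric>

// Threads per workgroup = lcm(SIMD width, output control points), so a
// workgroup always covers a whole number of patches.
uint32_t threads_per_workgroup(uint32_t simd_width, uint32_t control_points)
{
    return std::lcm(simd_width, control_points);
}
// e.g. lcm(32, 4) = 32 threads ->  8 patches per workgroup
//      lcm(32, 3) = 96 threads -> 32 patches per workgroup
```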
To avoid complexity handling indices in the tessellation control shader,
I've also changed the way vertex shaders for tessellation are handled.
They are now compute kernels using Metal's support for vertex-style
stage input. This lets us always emit vertices into the buffer in order
of vertex shader execution. Now we no longer have to deal with indexing
in the tessellation control shader. This also fixes a long-standing
issue where if an index were greater than the number of vertices to
draw, the vertex shader would wind up writing outside the buffer, and
the vertex would be lost.
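A rough sketch of the shape the emitted kernel takes (struct and buffer
names here are illustrative, not what SPIRV-Cross generates):

```metal
#include <metal_stdlib>
using namespace metal;

// The tessellation vertex shader becomes a compute kernel that consumes
// vertex-style stage input and writes its outputs to a buffer in
// vertex-execution order, so the control shader no longer needs to
// resolve indices itself.
struct VertexIn  { float4 position [[attribute(0)]]; };
struct VertexOut { float4 position; };

kernel void vert_main(VertexIn in [[stage_in]],
                      device VertexOut *out [[buffer(0)]],
                      uint gid [[thread_position_in_grid]])
{
    out[gid].position = in.position; // in order; no out-of-bounds writes
}
```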
This is a breaking change, and I know SPIRV-Cross has other clients, so
I've hidden this behind an option for now. In the future, I want to
remove this option and make it the default.
In MSL, the compiler refuses to allow access chains into a normal vector
type. What happens in practice instead is a read-modify-write, where the
vector is loaded, modified, and stored back as a whole.
The workaround is to convert the vector into a pointer-to-scalar before
the access chain adds the scalar index.
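Roughly, in MSL (illustrative function name):

```metal
#include <metal_stdlib>
using namespace metal;

// Instead of forming an access chain into a float4, reinterpret its
// address as a pointer to scalar and index that, avoiding the
// whole-vector load/modify/store.
void write_component(device float4 &v, uint index, float value)
{
    device float *p = reinterpret_cast<device float *>(&v);
    p[index] = value;
}
```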