This gets rather complicated because MSL does not support OpArrayLength
natively. We need to pass down a buffer which contains buffer sizes, and
we compute the array length on-demand.
Support both discrete descriptors as well as argument buffers.
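Roughly, the generated MSL looks something like the sketch below (the buffer name, binding index, and layout are illustrative assumptions, not the exact output):

#include <metal_stdlib>
using namespace metal;

// The runtime writes the byte size of every bound buffer into a small
// "buffer sizes" buffer; OpArrayLength then becomes a divide by the
// array stride (here 4 bytes per float).
kernel void main0(device float* ssbo [[buffer(0)]],
                  constant uint* spvBufferSizeConstants [[buffer(25)]],
                  uint gid [[thread_position_in_grid]])
{
    uint len = spvBufferSizeConstants[0] / 4u;
    if (gid < len)
        ssbo[gid] *= 2.0f;
}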
MSL generally emits the aliases, which means we cannot always place the
master type first as we do for GLSL and HLSL. The fix is simply to
reorder after we have tagged types with packing information, rather than
doing it in the parser fix-up.
Change aux buffer to swizzle buffer.
There is no good reason to expand the aux buffer, so name it
appropriately.
Make the code cleaner by emitting a straight pointer to uint rather than
a dummy struct which only contains a single unsized array member anyway.
This will also end up being very similar to how we implement swizzle
buffers for argument buffers.
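The emitted parameter changes roughly as follows (identifiers and the binding
index are illustrative, not the exact generated names):

#include <metal_stdlib>
using namespace metal;

// Before: a dummy struct wrapping a single unsized array member, e.g.
//   struct spvAux { uint swizzleConst[1]; };
//   constant spvAux& spvAuxBuffer [[buffer(30)]]
// After: a straight pointer to uint, named for what it actually holds.
fragment float4 main0(texture2d<float> tex [[texture(0)]],
                      sampler smp [[sampler(0)]],
                      constant uint* spvSwizzleConstants [[buffer(30)]])
{
    // Each resource's swizzle constant is looked up by index.
    uint swiz = spvSwizzleConstants[0];
    float4 c = tex.sample(smp, float2(0.5));
    return (swiz == 0u) ? c : c.bgra; // stand-in for the real swizzle helper
}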
Do not use implied binding if it overflows int32_t.
Some support for subgroups is present starting in Metal 2.0 on both iOS
and macOS. macOS gains more complete support in 10.14 (Metal 2.1).
Some restrictions are present. On iOS and on macOS 10.13, the
implementation of `OpGroupNonUniformElect` is incorrect: if thread 0 has
already terminated or is not executing a conditional branch, the first
thread that *is* executing will falsely believe itself not to be elected.
Unfortunately,
this operation is part of the "basic" feature set; without it, subgroups
cannot be supported at all.
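To make the failure mode concrete (this illustrates the semantics only, not the
code SPIRV-Cross actually emits; the attribute names assume Metal 2 compute
shaders):

#include <metal_stdlib>
using namespace metal;

// A naive "elect" that just tests for lane 0. If lane 0 has already returned,
// or took the other side of a branch, no active lane ever sees true, breaking
// the SPIR-V guarantee that exactly one active invocation is elected.
static inline bool naive_subgroup_elect(uint lane_id)
{
    return lane_id == 0u;
}

kernel void main0(device atomic_uint& counter [[buffer(0)]],
                  uint lane_id [[thread_index_in_simdgroup]],
                  uint gid [[thread_position_in_grid]])
{
    if (gid % 2u == 1u) // lane 0 normally takes the other branch...
    {
        if (naive_subgroup_elect(lane_id)) // ...so nobody increments here.
            atomic_fetch_add_explicit(&counter, 1u, memory_order_relaxed);
    }
}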
The `SubgroupSize` and `SubgroupLocalInvocationId` builtins are only
available in compute shaders (and, by extension, tessellation control
shaders), despite SPIR-V making them available in all stages. This
limits the usefulness of some of the subgroup operations in fragment
shaders.
Although Metal on macOS supports some clustered, inclusive, and
exclusive operations, it does not support them all. In particular,
inclusive and exclusive min, max, AND, OR, and XOR, as well as cluster
sizes other than 4, are not supported. If this becomes a problem, they
could be emulated, but at a significant performance cost due to the need
for non-uniform operations.
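Should emulation ever be needed, one possible sketch is a shuffle-based scan
(this assumes simd_shuffle_up() from Metal 2 on macOS and that all lanes are
active, which is exactly where the cost of handling non-uniformity would come
in; it is not something SPIRV-Cross emits today):

#include <metal_stdlib>
using namespace metal;

// Hillis-Steele style inclusive-min scan across a SIMD-group.
// Lanes below 'offset' would read undefined data from simd_shuffle_up(),
// so they simply keep their own value.
static inline float emulated_inclusive_min(float v, uint lane_id, uint simd_size)
{
    for (uint offset = 1u; offset < simd_size; offset <<= 1u)
    {
        float other = simd_shuffle_up(v, offset);
        if (lane_id >= offset)
            v = min(v, other);
    }
    return v;
}

kernel void main0(device float* values [[buffer(0)]],
                  uint gid [[thread_position_in_grid]],
                  uint lane_id [[thread_index_in_simdgroup]],
                  uint simd_size [[threads_per_simdgroup]])
{
    values[gid] = emulated_inclusive_min(values[gid], lane_id, simd_size);
}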
Avoids ugly warnings on nearly every compute shader.
We could do analysis to detect whether we need to emit this constant,
but it's a bit tedious to figure out if an OpConstantComposite is
actually used by opcodes, so just make it simple.
This is necessary to deal with indirect draws, where the draw parameters
are given in a buffer instead of passed by the CPU. For normal draws,
the draw parameters are set with Metal's `setVertexBytes:` method.
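As a rough sketch of what the shader sees either way (struct name, members, and
binding index are hypothetical, and the arithmetic is purely illustrative):

#include <metal_stdlib>
using namespace metal;

// For a normal draw the CPU can copy these few bytes straight into the
// command stream with setVertexBytes:. For an indirect draw the same values
// already live in a GPU-side buffer, so that buffer is bound instead.
struct spvDrawParameters
{
    uint baseVertex;
    uint baseInstance;
};

vertex float4 main0(uint vid [[vertex_id]],
                    constant spvDrawParameters& spvDraw [[buffer(29)]])
{
    // Only the source of the parameters differs; usage is unchanged.
    uint index = vid + spvDraw.baseVertex;
    return float4(float(index), float(spvDraw.baseInstance), 0.0f, 1.0f);
}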
This undoes the change to add the vertex count to the aux buffer,
rendering that entire discussion largely moot. Oh well. It was a
discussion that needed to happen anyway.
Structs are aligned as you would expect in MSL (maximum member
alignment); there is no 16-byte minimum alignment as in std140.
Also rename the dummy "pad" members to a reserved naming scheme.
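A small illustration of the difference (offsets assume standard MSL alignment
rules; nothing here is SPIRV-Cross specific):

#include <metal_stdlib>
using namespace metal;

struct Inner
{
    float2 v; // size 8, alignment 8
};

struct Outer
{
    Inner inner; // MSL aligns the struct to its largest member: 8 bytes
    float b;     // so 'b' sits at offset 8
    // Under std140, 'inner' would be rounded up to a 16-byte boundary and an
    // explicit pad member (now given a reserved name rather than plain "pad")
    // has to be emitted before 'b' to place it at offset 16.
};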
This is a fairly fundamental change in how IDs are handled.
It serves many purposes:
- Improve performance. We only need to iterate over IDs which are
relevant at any one time.
- Makes sure we iterate through IDs in SPIR-V module declaration order
rather than ID space. IDs don't have to be monotonically increasing,
which was an assumption SPIRV-Cross used to have. It has apparently
never been a problem until now.
- Support LUTs of structs. We do this by interleaving declaration of
constants and struct types in SPIR-V module order.
To support this, the ParsedIR interface needed to change slightly.
Before setting any ID with variant_set<T> we let ParsedIR know
that an ID with a specific type has been added. The surface area of the
change should be minimal.
ParsedIR will maintain a per-type list of IDs which the cross-compiler
will need to consider for later.
Instead of looping over ir.ids[] (which can be extremely large), we loop
over types now, using:
ir.for_each_typed_id<SPIRVariable>([&](uint32_t id, SPIRVariable &var) {
    handle_variable(var);
});
Now we make sure that we're never looking at irrelevant types.
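The registration side is roughly shaped like this (a simplified sketch; the
real interface may differ in details):

// Inside Compiler (simplified): before constructing the object through
// variant_set<T>, tell ParsedIR which type this ID holds, so it is appended
// to the per-type list in declaration order for for_each_typed_id<T>() to walk.
template <typename T, typename... P>
T &Compiler::set(uint32_t id, P &&... args)
{
    ir.add_typed_id(static_cast<Types>(T::type), id);
    auto &obj = variant_set<T>(ir.ids[id], std::forward<P>(args)...);
    obj.self = id;
    return obj;
}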
When trying to validate buffer sizes, we usually need to bail out when
using SpecConstantOps, but for some very specific cases where we allow
unsized arrays currently, we can safely allow "unknown" sized arrays as
well.
This is probably the best we can do; when we hit even more difficult
cases than this, we throw a more sensible error message.
Support MSL typedefs to declare 3-row row-major matrices as 3-column matrices.
Allow those matrices to be decorated as packed.
Support transposing those matrices when used.
Modify how member alignments are calculated.
Support Workgroup (threadgroup) variables.
Mark if SPIRConstant is used as an array length, since it cannot be specialized.
Resolve specialized array length constants.
Support passing an array to MSL function.
Support emitting GLSL array assignments in MSL via an array copy function.
Support for memory and control barriers.
Struct packing enhancements, including packing nested structs.
Enhancements to replacing illegal MSL variable and function names.
Add Compiler::get_entry_point_name_map() function to retrieve entry point renamings.
Remove CompilerGLSL::clean_func_name() as obsolete.
Fixes to types in bitcast MSL functions.
Add Variant::get_id() member function.
Add CompilerMSL::Options::msl_version option.
Add numerous MSL compute tests.
Add bool members is_read and is_written to SPIRType::Image.
Output correct texture read/write access by marking whether textures
are read from and written to by the shader.
Override bitcast_glsl_op() to use Metal as_type<type> functions.
Add implementations of SPIR-V functions inverse(), degrees() & radians().
Map inverseSqrt() to rsqrt().
Map roundEven() to rint().
GLSL functions imageSize() and textureSize() map to equivalent
expressions using the MSL get_width() & get_height() functions.
Map several SPIR-V integer bitfield functions to MSL equivalents.
Map SPIR-V atomic functions to MSL equivalents.
Map texture packing and unpacking functions to MSL equivalents.
Refactor existing, and add new, image query functions.
Reorganize header lines into includes and pragmas.
Simplify type_to_glsl() logic.
Add MSL test case vert/functions.vert for added function implementations.
Add MSL test case comp/atomic.comp for added function implementations.
test_shaders.py uses macOS compilation for MSL shader compilation validation.
Add to the suite of MSL tests and references any existing GLSL tests
that successfully convert GLSL->SPIR-V->MSL and compile as MSL.
test_shaders_helper() ignores hidden files that start with '.',
to avoid accidentally finding hidden OSX files such as .DS_Store.
Use xcrun to compile MSL shaders instead of hard-coded path to Metal compiler.
Wrap calls to xcrun in exception handling to ignore if Xcode not installed.
For MSL tests, move call to validate_shader_msl() to after call to
regression_check() to allow a converted MSL shader to be saved for
manual review even if it doesn't successfully compile as MSL.