This is kinda tricky, because if we only conditionally write to a function
parameter variable, its original value is implicitly preserved in SPIR-V, so we must
force an in qualifier on the parameter to get the same behavior in GLSL.
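A minimal GLSL sketch of the situation (the function and variable names are hypothetical):

    // Only a conditional write: when cond is false, the caller's value must survive,
    // so the parameter cannot be declared as a plain out parameter.
    void maybe_update(inout float v, bool cond)
    {
        if (cond)
            v = 1.0;
    }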
CompilerMSL accesses options using the same design pattern as CompilerGLSL and CompilerHLSL.
CompilerMSL supports setting vertex attribute and resource binding specifications via either the constructor or a compile() method overload.
CompilerMSL supports UBO packing and padding in a single pass.
spirv_cross app (main.cpp) supports turning off UBO packing and padding via a command-line option.
Add MSL UBO alignment test shader.
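As a rough sketch of the kind of layout mismatch this addresses (hypothetical shader):
std140 lets a scalar share the tail of a vec3, while a Metal float3 occupies a full
16 bytes, so the emitted MSL struct needs packed types or explicit padding to
reproduce the GLSL offsets:

    layout(std140, binding = 0) uniform UBO
    {
        vec3 a;   // std140: offset 0, size 12
        float b;  // std140: packs into offset 12; MSL needs a packed float3 (or padding) to match
    };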
CompilerMSL supports DimBuffer as an image dimension.
CompilerMSL checks the texture coordinate result type dimension before adding swizzles.
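For example (hypothetical shader), a buffer texture takes a scalar coordinate, so no
.xy-style swizzle should be appended to it:

    uniform samplerBuffer uBuf;

    vec4 fetch_one(int i)
    {
        // texelFetch on a Buffer-dimension image uses a plain int coordinate.
        return texelFetch(uBuf, i);
    }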
Update MSL reference shaders affected by this update.
In some cases, the compiler decided to emit the continue block first,
which invalidated the expressions used by the condition.
Parameters to functions can be evaluated in any order, which caused
"random" behavior.
Add to the suite of MSL tests and references any existing GLSL tests
that successfully convert GLSL -> SPIR-V -> MSL and compile as MSL.
test_shaders_helper() ignores hidden files that start with '.',
to avoid accidentally finding hidden OSX files such as .DS_Store.
Use xcrun to compile MSL shaders instead of hard-coded path to Metal compiler.
Wrap calls to xcrun in exception handling to ignore the error if Xcode is not installed.
For MSL tests, move call to validate_shader_msl() to after call to
regression_check() to allow a converted MSL shader to be saved for
manual review even if it doesn't successfully compile as MSL.
If we run on a system with Xcode installed to a default path, run the Xcode
Metal shader compiler to validate the generated MSL shader.
This uncovers an issue in the existing MSL test: the MSL backend currently
does not auto-assign attribute locations, which means that translating a
GLSL shader without a location layout produces invalid MSL that
generates "error: 'attribute' index '0' is used more than once".
To extract a column from a row-major matrix, we need to do a strided load, one
component at a time. In this case flattened_access_chain_offset still returns
the offset to the first element, but the stride is equal to the matrix stride
instead of the vector stride.
For this to work, we need to pass the matrix stride (and transpose flag) through,
similar to how matrix flattening works.
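A rough sketch in terms of flattened uniform data (UBO, base and the function names
are hypothetical):

    uniform vec4 UBO[8];   // flattened buffer contents

    vec4 load_col1_column_major(int base)
    {
        // Column-major: column 1 is a contiguous vec4, so it is a single indexed load.
        return UBO[base + 1];
    }

    vec4 load_col1_row_major(int base)
    {
        // Row-major: the components of the column are separated by the matrix stride,
        // so they are gathered one component at a time.
        return vec4(UBO[base + 0].y, UBO[base + 1].y, UBO[base + 2].y, UBO[base + 3].y);
    }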
Additionally, slightly clean up the recursive flattened_access_chain structure:
specifically, instead of deciding mid-traversal that we need matrix stride
information, we can just pass the matrix stride through. For access chains
that end in a matrix or vector this gets us what we need, and for access chains
that end in a struct, the flattened_access_chain_struct code will recompute the
correct stride/transposition data to pass further down.
We currently only support access chains that end in a matrix, by propagating a
"needs transpose" flag upstream, which flips the matrix multiplication order.
It's possible to support indexed extraction as well, however it would have to
generate code like this:

    vec4 row = vec4(UBO[0].y, UBO[1].y, UBO[2].y, UBO[3].y);

for a column equivalent of:

    vec4 row = UBO[1];

It is definitely possible to do so, but it requires signaling to the vector output
that it needs to switch to per-component extraction, which is a bit more trouble
than it is worth for now.
Previously, we would proactively generate parentheses when generating
binary ops; however, this leads to uglier code and triggers compiler warnings
when the expression is used as a conditional.
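A small before/after sketch (hypothetical function and names):

    float pick(float a, float b, float x, float y)
    {
        // Previously emitted: if ((a == b)) ..., which some compilers warn about when a
        // parenthesized equality is used directly as a conditional.
        if (a == b)   // now emitted without the redundant parentheses
            return x;
        return y;
    }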
The size of an array can be a specialization constant or a spec constant
op. This complicates things quite a lot.
Reflection becomes very painful in the presence of expressions instead
of literals, so add a new array which expresses this.
It is unlikely that we will need to do accurate reflection of interface
types which have specialization constant size.
SSBOs and UBOs will for now throw an exception if a dynamic size is used, since it
is very difficult to know the real size.
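A hypothetical Vulkan GLSL example of the inputs this has to handle, with one array
sized directly by a specialization constant and one sized by a spec constant op:

    layout(local_size_x = 64) in;
    layout(constant_id = 0) const int NUM_ELEMENTS = 4;

    // Array sized directly by a specialization constant.
    shared float scratch[NUM_ELEMENTS];
    // Array sized by an expression on a specialization constant (an OpSpecConstantOp).
    shared float scratch2[NUM_ELEMENTS * 2];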
- Only consider I/O variables that are part of the OpEntryPoint interface.
- Keep a safe fallback if the number of entry points is 1, to avoid potentially
  breaking previously working shaders.
There was a potential problem if variables were invalidated and SPIR-V
read expressions which depended on other expressions which in turn depended on the
invalidated variable.
Also fixes an issue where variables were considered immutable if they were
forwardable. This allowed some incorrect optimizations to slip through.
This is now fixed in the ESSL 3.10 backend of glslang, so we can remove the old workaround
of dropping full memory barriers.
Also fixes an unrelated issue which newer glslang detects.
OpName is only for debug information, so we must be very careful that
we do not reuse the same name for different variables.
This was previously done for local variables, but this commit extends
this to global variables as well.
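For example (hypothetical output; the exact renaming suffix is illustrative), two
distinct SPIR-V variables may both carry OpName "foo", and the emitted GLSL must not
declare the same name twice:

    vec4 foo;     // first variable named "foo"
    vec4 foo_1;   // second variable also named "foo" in the input, renamed in the output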
In some cases we need to bitcast when dealing with int vs. uint.
SPIR-V allows inputs to be of different integer signedness, so we need
to deal with this somehow.
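A hypothetical example of the kind of cast this produces, where a signed and an
unsigned input feed an unsigned OpIAdd:

    layout(location = 0) flat in int a;
    layout(location = 1) flat in uint b;
    layout(location = 0) out uint result;

    void main()
    {
        // The signed operand is cast so both operands match the unsigned result type;
        // in GLSL the int -> uint constructor preserves the bit pattern.
        result = uint(a) + b;
    }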
Add a testing system to test SPIR-V assembly.
For now, test all possible combinations for all major cases:
- IAdd (which doesn't care about input types as long as they're equal).
- SDiv/UDiv operations, which care about input type.
- Arithmetic/logical right shifts.
- IEqual to test outputs to bvec, which shouldn't get an output cast. Also
  tests casting in function-like calls (see the sketch after this list).
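A hypothetical sketch of the IEqual case: one input is cast so the operand types
match, while the bvec result itself is left uncast:

    layout(location = 0) flat in ivec4 a;
    layout(location = 1) flat in uvec4 b;
    layout(location = 0) out vec4 color;

    void main()
    {
        bvec4 eq = equal(uvec4(a), b);   // input cast only; no cast on the bool result
        color = vec4(eq);
    }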