This allows two variables of the same struct type to be flattened
into the same interface struct without a member name conflict.
Add shaders-msl/frag/in_block_with_multiple_structs_of_same_type.frag
unit test shader to demonstrate this.
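A minimal sketch of the flattened interface struct this produces (member and location names are illustrative, not the exact generated output):

```c++
struct main0_in
{
    // Two block members of the same struct type; qualifying the flattened
    // member names by their block member avoids a name collision.
    float4 inBlock_foo1_a [[user(locn0)]];
    float4 inBlock_foo2_a [[user(locn1)]];
};
```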
* Fix '--msl-multi-patch-workgroup' cases where thread count exceeds data bounds
* Fix gl_PrimitiveID off-by-one error when computing last valid index
* Point gl_out to the last patch's data when threads exceed input data bounds
* Point patchOut to the last patch's data when threads exceed input data bounds (see the clamping sketch below this list)
* Update MSL test expectations.
* Undo change to MSL multi-patch hull output bound checks
* Update MSL multi-patch test expectations.
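A hedged sketch of the clamping described in the list above (buffer layout and names are assumptions, not the exact generated code):

```c++
// Threads rounded up past the last patch are clamped so that gl_out and
// patchOut point at the last patch's data; their writes stay in-bounds
// and the duplicated results are simply ignored.
uint patch_id = min(gl_PrimitiveID, num_patches - 1u);
device ControlPointOut *gl_out = &out_buf[patch_id * out_points_per_patch];
device PatchOut *patchOut = &patch_buf[patch_id];
```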
Firstly, never flatten inputs or outputs in multi-patch mode.
The main scenario where we do need to care is Block IO.
In this case, we should only flatten the top-level member, and after
that we use access chains as normal.
Using structs in the Input storage class is now possible as well. We don't
need to consider per-location fixups at all here. In Vulkan, IO structs
must match exactly. Only plain vectors can have smaller vector sizes as
a special case.
This should hopefully reduce underutilization of the GPU, especially on
GPUs where the thread execution width is greater than the number of
control points.
This also simplifies initialization by reading the buffer directly
instead of using Metal's vertex-attribute-in-compute support. It turns
out the only way in which shader stages are allowed to differ in their
interfaces is in the number of components per vector; the base type must
be the same. Since we are using the raw buffer instead of attributes, we
can now also emit arrays and matrices directly into the buffer, instead
of flattening them and then unpacking them. Structs are still flattened,
however; this is due to the need to handle vectors with fewer components
than were output, and I think handling this while also directly emitting
structs could get ugly.
Another advantage of this scheme is that the extra invocations needed to
read the attributes when there were more input than output points are no
longer needed. The number of threads per workgroup is now lcm(SIMD-size,
output control points). This should ensure we always process a whole
number of patches per workgroup.
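The sizing rule in host-side C++ (a sketch; the variable names are assumptions):

```c++
#include <numeric>  // std::lcm (C++17)

// Every SIMD-group is fully utilized and no patch straddles a workgroup
// boundary, because the thread count divides evenly into both.
uint32_t threads_per_workgroup(uint32_t simd_width, uint32_t out_control_points)
{
    return std::lcm(simd_width, out_control_points);
}
// e.g. std::lcm(32, 3) == 96 threads, i.e. exactly 32 patches per workgroup
```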
To avoid complexity handling indices in the tessellation control shader,
I've also changed the way vertex shaders for tessellation are handled.
They are now compute kernels using Metal's support for vertex-style
stage input. This lets us always emit vertices into the buffer in order
of vertex shader execution. Now we no longer have to deal with indexing
in the tessellation control shader. This also fixes a long-standing
issue where if an index were greater than the number of vertices to
draw, the vertex shader would wind up writing outside the buffer, and
the vertex would be lost.
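Roughly what such a translated vertex shader looks like (a sketch; the struct and buffer names are assumptions, not SPIRV-Cross's exact output):

```c++
#include <metal_stdlib>
using namespace metal;

struct VIn  { float4 pos [[attribute(0)]]; };
struct VOut { float4 pos; };

// A compute kernel that still consumes vertex-style stage input. Outputs
// land in the buffer in execution order, so the tessellation control
// shader no longer needs to deal with indexing, and an out-of-range index
// can no longer write outside the buffer.
kernel void vert_main(VIn in [[stage_in]],
                      device VOut *out_verts [[buffer(0)]],
                      uint tid [[thread_position_in_grid]])
{
    out_verts[tid].pos = in.pos;
}
```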
This is a breaking change, and I know SPIRV-Cross has other clients, so
I've hidden this behind an option for now. In the future, I want to
remove this option and make it the default.
MSL does not support this, so we have to emulate it by passing it around
as a varying between stages. We use a special "user(clipN)" attribute
for this rather than locN, which is used for user varyings.
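For example, the stage-out struct might carry them like this (a sketch; member names are illustrative):

```c++
struct main0_out
{
    float4 gl_Position [[position]];
    float4 color [[user(locn0)]];             // ordinary user varying: locN
    float gl_ClipDistance_0 [[user(clip0)]];  // emulated clip distance: clipN
    float gl_ClipDistance_1 [[user(clip1)]];
};
```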
To support loading arrays of arrays properly in tessellation, we need a
rewrite of how tessellation access chains are handled.
The major change is to remove the implicit unflatten step inside
access_chain which does not take into account the case where you load
directly from a control point array variable.
We defer unflatten step until OpLoad time instead.
This fixes cases where we load array of {array,matrix,struct}.
Removes the hacky path for MSL access chain index workaround.
We used to use the Binding decoration for this, but this method is
hopelessly broken. If no explicit MSL resource remapping exists, we
remap automatically in a manner which should always "just work".
Return after loading the input control point array if there are more
input points than output points, and this was one of the helper
invocations spun off to load the input points. I was hesitant to do this
initially, since the MSL spec has this to say about barriers:
> The `threadgroup_barrier` (or `simdgroup_barrier`) function must be
> encountered by all threads in a threadgroup (or SIMD-group) executing
> the kernel.
That is, if any thread executes the barrier, then all threads must
execute it, or the barrier'd invocations will hang. But, the key words
here seem to be "executing the kernel;" inactive invocations, those that
have already returned, need not encounter the barrier to prevent hangs.
Indeed, I've encountered no problems from doing this, at least on my
hardware. This also fixes a few CTS tests that were failing due to
execution ordering; apparently, my assumption that the later, invalid
data written by the helpers would get overwritten was wrong.
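In outline, the early-out looks something like this (a sketch; the names and the load itself are assumptions):

```c++
// Helper invocations exist only to load the extra input points. Once the
// inputs are loaded they return; per the reasoning above, invocations that
// have already exited need not encounter the barrier.
if (gl_InvocationID < in_control_points)
    gl_in[gl_InvocationID] = load_input_point(gl_InvocationID);
if (gl_InvocationID >= out_control_points)
    return;
threadgroup_barrier(mem_flags::mem_threadgroup);
```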
This is intended to be used to support `VK_KHR_maintenance2`'s
tessellation domain origin feature. If `tess_domain_origin_lower_left`
is `true`, the `v` coordinate will be inverted with respect to the
domain. Additionally, in `Triangles` mode, the `v` and `w` coordinates
will be swapped. This is because the winding order is interpreted
differently in lower-left mode.
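Concretely, the fixup amounts to something like this, following the rules stated above (a sketch; assuming the quad domain coordinate arrives as a float2 and the triangle domain coordinate as barycentric u/v/w):

```c++
// Quad domain: invert v across the domain.
tess_coord.y = 1.0 - tess_coord.y;

// Triangle domain: additionally swap v and w, reflecting the different
// winding-order interpretation in lower-left mode.
tess_coord_tri = float3(tess_coord_tri.x, tess_coord_tri.z, tess_coord_tri.y);
```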
These are mapped to Metal's post-tessellation vertex functions. The
semantic difference is much less here, so this change should be simpler
than the previous one. There are still some hairy parts, though.
In MSL, the array of control point data is represented by a special
type, `patch_control_point<T>`, where `T` is a valid stage-input type.
This object must be embedded inside the patch-level stage input. For
this reason, I've added a new type to the type system to represent this.
On Mac, the number of input control points to the function must be
specified in the `patch()` attribute. This is optional on iOS.
SPIRV-Cross takes this from the `OutputVertices` execution mode; the
intent is that if it's not set in the shader itself, MoltenVK will set
it from the tessellation control shader. If you're translating these
offline, you'll have to update the control point count manually, since
this number must match the number that is passed to the
`drawPatches:...` family of methods.
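A sketch of the resulting MSL shape (types, names, and counts are illustrative):

```c++
#include <metal_stdlib>
using namespace metal;

struct ControlPoint { float4 pos [[attribute(0)]]; };

struct PatchIn
{
    float4 patch_color [[attribute(1)]];      // per-patch varying
    patch_control_point<ControlPoint> gl_in;  // per-control-point data
};

// Post-tessellation vertex function. The control point count in patch()
// comes from OutputVertices and must match the drawPatches:... call.
[[patch(triangle, 3)]]
vertex float4 tese_main(PatchIn in [[stage_in]],
                        float3 uvw [[position_in_patch]])
{
    return in.gl_in[0].pos * uvw.x
         + in.gl_in[1].pos * uvw.y
         + in.gl_in[2].pos * uvw.z;
}
```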
Fixes #120.
This should fix a whole host of issues related to structs in the `Input`
storage class in a tessellation control shader.
Also, use pointer arithmetic instead of dereferencing the `ops` array.
This is critical in case we wind up stepping beyond the bounds of the
array.
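The distinction in C++ terms (illustrative):

```c++
// Indexing forms a reference, which is undefined behavior past the end:
const uint32_t *p = &ops[length];  // UB when length is out of bounds
// Pointer arithmetic up to one-past-the-end is well-defined, and the
// result is never dereferenced unless it is in range:
const uint32_t *q = ops + length;
```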
There's no need to do so, since these are not stage-out structs being
returned, but regular structures being written to a buffer. This also
neatly avoids issues writing to composite (e.g. arrayed) per-patch
outputs from a tessellation control shader.
These are transpiled to kernel functions that write the output of the
shader to three buffers: one for per-vertex varyings, one for per-patch
varyings, and one for the tessellation levels. This structure is
mandated by the way Metal works, where the tessellation factors are
supplied to the draw method in their own buffer, while the per-patch and
per-vertex varyings are supplied as though they were vertex attributes;
since they have different step rates, they must be in separate buffers.
The kernel is expected to be run in a workgroup whose size is the
greater of the number of input or output control points. It uses Metal's
support for vertex-style stage input to a compute shader to get the
input values; therefore, at least one instance must run per input point.
Meanwhile, Vulkan mandates that it run at least once per output point.
Overrunning the output array is a concern, but any values written should
either be discarded or overwritten by subsequent patches. I'm probably
going to put some slop space in the buffer when I integrate this into
MoltenVK to be on the safe side.
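The overall kernel shape (a sketch; struct names, buffer indices, and the four-point patch size are assumptions):

```c++
#include <metal_stdlib>
using namespace metal;

struct ControlPointIn { float4 pos [[attribute(0)]]; };
struct VertexOut      { float4 pos; };
struct PatchOut       { float4 color; };

kernel void tesc_main(ControlPointIn in [[stage_in]],
                      device VertexOut *out_verts [[buffer(0)]],
                      device PatchOut *out_patches [[buffer(1)]],
                      device MTLQuadTessellationFactorsHalf *factors [[buffer(2)]],
                      uint gl_InvocationID [[thread_index_in_threadgroup]],
                      uint gl_PrimitiveID [[threadgroup_position_in_grid]])
{
    // Per-vertex varyings: one element per output control point.
    out_verts[gl_PrimitiveID * 4u + gl_InvocationID].pos = in.pos;

    if (gl_InvocationID == 0u)
    {
        // Per-patch varyings and tessellation factors: one element per patch.
        out_patches[gl_PrimitiveID].color = float4(1.0);
        for (int i = 0; i < 4; i++)
            factors[gl_PrimitiveID].edgeTessellationFactor[i] = half(4.0);
        factors[gl_PrimitiveID].insideTessellationFactor[0] = half(4.0);
        factors[gl_PrimitiveID].insideTessellationFactor[1] = half(4.0);
    }
}
```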
Don't use `addsat()`/`subsat()`; that'll erroneously flag cases where
the sum is exactly the maximum integer value, or the difference is
exactly 0. Also, correct the condition for the `select()` function; it's
basically `mix()` with a boolean factor.
(What was I *thinking*?)
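Spelled out (illustrative):

```c++
// addsat(a, b) == UINT_MAX was meant to detect overflow, but it also fires
// when a + b is exactly UINT_MAX without overflowing; likewise
// subsat(a, b) == 0 fires whenever a == b. A plain comparison is needed.
bool in_bounds = idx < count;

// select(x, y, cond) yields y where cond is true and x otherwise, i.e.
// mix() with a boolean factor, so the argument order matters.
uint safe_idx = select(count - 1u, idx, in_bounds);
```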
Support flattening StorageOutput & StorageInput matrices and arrays (see the sketch after this list).
No longer move matrix & array inputs to a separate buffer.
Add separate SPIRFunction::fixup_statements_in & SPIRFunction::fixup_statements_out
instead of just SPIRFunction::fixup_statements.
Emit SPIRFunction::fixup_statements at beginning of functions.
CompilerMSL track vars_needing_early_declaration.
Pass global output variables as variables to functions that access them.
Sort input structs by location, same as output structs.
Emit struct declarations in order output, input, uniforms.
Regenerate reference shaders to the new formats defined above.
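For instance, a mat4 input can be flattened into four float4 attributes and reassembled by a fixup statement (a sketch; names are illustrative):

```c++
struct main0_in
{
    float4 m_0 [[attribute(0)]];
    float4 m_1 [[attribute(1)]];
    float4 m_2 [[attribute(2)]];
    float4 m_3 [[attribute(3)]];
};

// Emitted as a fixup statement at the beginning of the function:
float4x4 m = float4x4(in.m_0, in.m_1, in.m_2, in.m_3);
```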
Add bool members is_read and is_written to SPIRType::Image.
Output correct texture read/write access by marking whether textures
are read from and written to by the shader.
Override bitcast_glsl_op() to use Metal as_type<type> functions.
Add implementations of SPIR-V functions inverse(), degrees() & radians().
Map inverseSqrt() to rsqrt().
Map roundEven() to rint().
GLSL functions imageSize() and textureSize() map to equivalent
expressions using the MSL get_width() & get_height() functions (a few of
these mappings are sketched after this list).
Map several SPIR-V integer bitfield functions to MSL equivalents.
Map SPIR-V atomic functions to MSL equivalents.
Map texture packing and unpacking functions to MSL equivalents.
Refactor existing, and add new, image query functions.
Reorganize header lines into includes and pragmas.
Simplify type_to_glsl() logic.
Add MSL test case vert/functions.vert for added function implementations.
Add MSL test case comp/atomic.comp for added function implementations.
test_shaders.py use macOS compilation for MSL shader compilation validations.
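A few of the mappings above, as they read in MSL (illustrative):

```c++
int   i  = as_type<int>(f);                           // bitcast via as_type<T>
float r  = rsqrt(x);                                  // inverseSqrt()
float e  = rint(x);                                   // roundEven()
uint2 sz = uint2(tex.get_width(), tex.get_height());  // imageSize()/textureSize()
```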