Commit Graph

15 Commits

Author SHA1 Message Date
Bill Hollings
0c0fd98322 MSL: Use var name instead of var-type name for flattened interface members.
This allows two variables of the same struct type to be flattened
into the same interface struct without a member name conflict.

Add shaders-msl/frag/in_block_with_multiple_structs_of_same_type.frag
unit test shader to demonstrate this.
2022-03-04 11:38:53 -05:00
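A minimal sketch of the naming scheme this commit describes; the struct, variable names, and location assignments below are invented for illustration and are not the contents of the added test shader:

    #include <metal_stdlib>
    using namespace metal;

    struct Data { float4 v; };

    // Two inputs of the same struct type flattened into one interface struct.
    // Members are keyed on the variable names (inA, inB) rather than on the
    // shared type name (Data), so the two flattened members no longer collide.
    struct main0_in
    {
        float4 inA_v [[user(locn0)]];
        float4 inB_v [[user(locn1)]];
    };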
Lukas Taparauskas
72a2ec4c1b MSL: Fix '--msl-multi-patch-workgroup' out-of-bounds reads when dispatching more threads than control points (#1662)
* Fix '--msl-multi-patch-workgroup' cases where thread count exceeds data bounds

* Fix gl_PrimitiveID off-by-one error when computing the last valid index
* Point gl_out to the last patch's data when threads exceed input data bounds
* Point patchOut to the last patch's data when threads exceed input data bounds

* Update MSL test expectations.

* Undo change to MSL multi-patch hull output bound checks

* Update MSL multi-patch test expectations.
2021-04-29 20:01:26 +02:00
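A rough sketch of the clamping behaviour the bullets above describe; the helper name and parameters are assumptions, not the exact code SPIRV-Cross emits:

    // Threads dispatched past the end of the input data are redirected to the
    // last valid patch instead of reading or writing out of bounds. Note the
    // "- 1u": the last valid index is count - 1, the off-by-one fixed above.
    uint spv_clamp_patch_index(uint invocation_id,
                               uint out_control_points,
                               uint patch_count)
    {
        uint patch_index = invocation_id / out_control_points;
        return (patch_index < patch_count) ? patch_index : (patch_count - 1u);
    }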
Hans-Kristian Arntzen
46c48ee6b5 MSL: Rewrite how IO blocks are emitted in multi-patch mode.
Firstly, never flatten inputs or outputs in multi-patch mode.
The main scenario where we do need to care is Block IO.
In this case, we should only flatten the top-level member, and after
that we use access chains as normal.

Using structs in the Input storage class is now possible as well. We don't
need to consider per-location fixups at all here. In Vulkan, IO structs
must match exactly. Only plain vectors can have smaller vector sizes as
a special case.
2021-04-19 12:10:49 +02:00
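A sketch, with assumed names, of the shape this produces: only the block's own top-level members are flattened into the stage IO struct, and composites below that level stay intact and are addressed with ordinary access chains:

    #include <metal_stdlib>
    using namespace metal;

    struct VertexData
    {
        float4 position;
        float3 normal;
    };

    // The IO block is flattened exactly one level deep; the nested struct
    // member is kept as a struct instead of being flattened further.
    struct main0_in
    {
        float4 Block_color;
        VertexData Block_data;
    };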
Chip Davis
688c5fcbda MSL: Add support for processing more than one patch per workgroup.
This should hopefully reduce underutilization of the GPU, especially on
GPUs where the thread execution width is greater than the number of
control points.

This also simplifies initialization by reading the buffer directly
instead of using Metal's vertex-attribute-in-compute support. It turns
out the only way in which shader stages are allowed to differ in their
interfaces is in the number of components per vector; the base type must
be the same. Since we are using the raw buffer instead of attributes, we
can now also emit arrays and matrices directly into the buffer, instead
of flattening them and then unpacking them. Structs are still flattened,
however; this is due to the need to handle vectors with fewer components
than were output, and I think handling this while also directly emitting
structs could get ugly.

Another advantage of this scheme is that the extra invocations needed to
read the attributes when there were more input than output points are no
longer necessary. The number of threads per workgroup is now lcm(SIMD-size,
output control points). This should ensure we always process a whole
number of patches per workgroup.

To avoid complexity handling indices in the tessellation control shader,
I've also changed the way vertex shaders for tessellation are handled.
They are now compute kernels using Metal's support for vertex-style
stage input. This lets us always emit vertices into the buffer in order
of vertex shader execution. Now we no longer have to deal with indexing
in the tessellation control shader. This also fixes a long-standing
issue where if an index were greater than the number of vertices to
draw, the vertex shader would wind up writing outside the buffer, and
the vertex would be lost.

This is a breaking change, and I know SPIRV-Cross has other clients, so
I've hidden this behind an option for now. In the future, I want to
remove this option and make it the default.
2020-07-23 17:59:54 -05:00
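A worked example of the threadgroup sizing rule above, with assumed numbers; the helper below sketches the arithmetic and is not code SPIRV-Cross generates:

    // threads per threadgroup = lcm(thread execution width, output control points)
    // e.g. lcm(32, 3) = 96 threads, i.e. 96 / 3 = 32 whole patches per
    // threadgroup, so a workgroup never ends on a partially processed patch.
    uint spv_threads_per_threadgroup(uint simd_width, uint out_control_points)
    {
        uint a = simd_width, b = out_control_points;
        while (b != 0u)            // Euclid's algorithm: a ends up as gcd(a, b)
        {
            uint t = a % b;
            a = b;
            b = t;
        }
        return (simd_width / a) * out_control_points;
    }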
Hans-Kristian Arntzen
e1acbd3dcf MSL: Declare struct type explicitly.
Disambiguates initializer list.
2019-10-26 16:21:46 +02:00
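A small sketch of the ambiguity being removed; the type and member names are invented:

    #include <metal_stdlib>
    using namespace metal;

    struct Foo
    {
        float2 a;
        float  b;
    };

    inline Foo make_foo()
    {
        // Spelling the type on the initializer makes explicit which aggregate
        // the braces construct; a bare { ... } list can be ambiguous once it
        // is nested inside a larger initializer.
        Foo f = Foo{ float2(1.0f), 2.0f };
        return f;
    }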
Hans-Kristian Arntzen
27d6d45671 MSL: Rewrite tessellation_access_chain.
To support loading arrays of arrays properly in tessellation, we need a
rewrite of how tessellation access chains are handled.

The major change is to remove the implicit unflatten step inside
access_chain, which does not take into account the case where you load
directly from a control point array variable.

We defer the unflatten step until OpLoad time instead.
This fixes cases where we load array of {array,matrix,struct}.

Removes the hacky path for MSL access chain index workaround.
2019-10-26 16:10:12 +02:00
Lukas Hermanns
7ad0a84778 Updates for pull request #1162 2019-09-24 14:35:25 -04:00
Lukas Hermanns
cb3ecb9e1b Updated reference Metal shaders. 2019-09-17 15:11:19 -04:00
Mark Satterthwaite
564cb3c08d Update the Metal shaders to account for changes in the shader compilation. 2019-09-11 15:06:05 -04:00
Thomas Roughton
91b2f34a3d Update tests to account for all non-entry-point functions being inlined 2019-08-30 09:39:06 +12:00
Michael Barriault
d6754c5713 Fix tests for device->constant address space change in MSL tessellation control shader generation. 2019-04-10 18:37:04 +01:00
Chip Davis
a43dcd7b99 MSL: Return early from helper tesc invocations.
Return after loading the input control point array if there are more
input points than output points, and this was one of the helper
invocations spun off to load the input points. I was hesitant to do this
initially, since the MSL spec has this to say about barriers:

> The `threadgroup_barrier` (or `simdgroup_barrier`) function must be
> encountered by all threads in a threadgroup (or SIMD-group) executing
> the kernel.

That is, if any thread executes the barrier, then all threads must
execute it, or the barrier'd invocations will hang. But, the key words
here seem to be "executing the kernel;" inactive invocations, those that
have already returned, need not encounter the barrier to prevent hangs.
Indeed, I've encountered no problems from doing this, at least on my
hardware. This also fixes a few CTS tests that were failing due to
execution ordering; apparently, my assumption that the later, invalid
data written by the helpers would get overwritten was wrong.
2019-02-24 12:17:47 -06:00
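A sketch of the early-out described above, with assumed names and only the relevant lines of the kernel:

    #include <metal_stdlib>
    using namespace metal;

    kernel void tesc_sketch(constant uint& out_control_points [[buffer(0)]],
                            uint gl_InvocationID [[thread_index_in_threadgroup]])
    {
        // (this invocation's input control point has been loaded at this point)

        // Helper invocations exist only to load extra input points; once that
        // is done they return, so they are no longer "executing the kernel"
        // and need not reach the barrier below.
        if (gl_InvocationID >= out_control_points)
            return;

        threadgroup_barrier(mem_flags::mem_threadgroup);

        // (normal tessellation-control work continues for active invocations)
    }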
Chip Davis
13df78bebf Unflatten inputs when copying to outputs.
This should fix a whole host of issues related to structs in the `Input`
storage class in a tessellation control shader.

Also, use pointer arithmetic instead of dereferencing the `ops` array.
This is critical in case we wind up stepping beyond the bounds of the
array.
2019-02-13 12:37:24 -06:00
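A sketch, with invented names, of what "unflattening" means here: inputs that arrive as individually flattened members are reassembled into the struct the shader declared before being copied to the output:

    #include <metal_stdlib>
    using namespace metal;

    struct VertexData
    {
        float4 position;
        float3 normal;
    };

    // Flattened per-vertex input members...
    struct main0_in
    {
        float4 vtx_position;
        float3 vtx_normal;
    };

    // ...are rebuilt into the declared struct before the copy, so the output
    // side sees a whole struct rather than loose members.
    inline VertexData spv_unflatten(main0_in flat_in)
    {
        VertexData v;
        v.position = flat_in.vtx_position;
        v.normal   = flat_in.vtx_normal;
        return v;
    }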
Chip Davis
0bb6bbda22 Never flatten outputs when capturing them.
There's no need to do so, since these are not stage-out structs being
returned, but regular structures being written to a buffer. This also
neatly avoids issues writing to composite (e.g. arrayed) per-patch
outputs from a tessellation control shader.
2019-02-11 17:18:54 -06:00
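A sketch, with assumed names, of what capturing the outputs looks like: the outputs are ordinary structs written into a buffer, so composite (e.g. arrayed) per-patch outputs can be stored as-is rather than flattened into a stage-out return value:

    #include <metal_stdlib>
    using namespace metal;

    struct main0_patchOut
    {
        float4 tint[2];    // arrayed per-patch output kept as an array member
        float3 center;
    };

    kernel void tesc_sketch(device main0_patchOut* patchOut [[buffer(0)]],
                            uint patch_id [[threadgroup_position_in_grid]])
    {
        // No stage-out struct is returned; the struct is written to the buffer.
        patchOut[patch_id].tint[0] = float4(1.0f);
        patchOut[patch_id].tint[1] = float4(0.0f);
        patchOut[patch_id].center  = float3(0.5f);
    }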
Chip Davis
eb89c3a428 MSL: Add support for tessellation control shaders.
These are transpiled to kernel functions that write the output of the
shader to three buffers: one for per-vertex varyings, one for per-patch
varyings, and one for the tessellation levels. This structure is
mandated by the way Metal works, where the tessellation factors are
supplied to the draw method in their own buffer, while the per-patch and
per-vertex varyings are supplied as though they were vertex attributes;
since they have different step rates, they must be in separate buffers.

The kernel is expected to be run in a workgroup whose size is the
greater of the number of input or output control points. It uses Metal's
support for vertex-style stage input to a compute shader to get the
input values; therefore, at least one instance must run per input point.
Meanwhile, Vulkan mandates that it run at least once per output point.
Overrunning the output array is a concern, but any values written should
either be discarded or overwritten by subsequent patches. I'm probably
going to put some slop space in the buffer when I integrate this into
MoltenVK to be on the safe side.
2019-02-07 08:51:22 -06:00
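A sketch of the three-buffer arrangement described above; the struct names, binding indices, and the choice of quad-domain tessellation factors are assumptions for illustration:

    #include <metal_stdlib>
    using namespace metal;

    struct main0_out      { float4 color; };   // per-vertex varyings
    struct main0_patchOut { float4 tint;  };   // per-patch varyings

    // One buffer per step rate: per-control-point data, per-patch data, and
    // the tessellation levels, which Metal wants in a buffer of their own.
    kernel void tesc_sketch(device main0_out* per_vertex_out [[buffer(0)]],
                            device main0_patchOut* per_patch_out [[buffer(1)]],
                            device MTLQuadTessellationFactorsHalf* tess_level [[buffer(2)]],
                            uint gid [[thread_position_in_grid]])
    {
        // Per-vertex and per-patch outputs have different step rates, hence
        // the separate buffers; the factor buffer is handed to the draw call
        // and consumed directly by the tessellator.
    }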