Commit Graph

40 Commits

Author SHA1 Message Date
Chip Davis
68f0257f26 Use --preserve-numeric-ids when assembling test shaders.
This makes it easier to debug codegen for these shaders.
2023-06-23 14:54:16 -07:00
Chip Davis
c7ce92a95b MSL: Manually update BuiltInHelperInvocation when a fragment is discarded.
Some Metal devices have a bug where `simd_is_helper_thread()` won't
return true after a fragment has been discarded. We can work around this
by manually setting `gl_HelperInvocation` upon discarding a fragment.
This is fairly unintrusive, so it is enabled by default. I've made it an
option so that, when the bug is fixed, we can disable it.
2022-11-19 23:48:26 -08:00
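
A rough illustration of the workaround described in the commit above; the helper name and exact shape are hypothetical, not necessarily what SPIRV-Cross emits:

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical helper, not the literal emitted code: mirror the shader's copy
// of gl_HelperInvocation by hand, since simd_is_helper_thread() may keep
// returning false after a discard on the affected devices.
static inline void spvDiscardAndDemote(thread bool& gl_HelperInvocation)
{
    gl_HelperInvocation = true;
    discard_fragment();
}
```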
Chip Davis
2219c4a392 MSL: Support SPV_EXT_demote_to_helper_invocation for MSL 2.3.
MSL 2.3 has everything needed to support this extension on all
platforms. The existing `discard_fragment()` function was given demote
semantics, similar to Direct3D, and the `simd_is_helper_thread()`
function was finally added to iOS.

I've left the old test alone. Should I remove it in favor of these?
2020-10-13 00:25:32 -05:00
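
A minimal sketch of how the two operations can map onto MSL 2.3, assuming `discard_fragment()` carries demote semantics as described above (the condition and return value are purely illustrative):

```metal
#include <metal_stdlib>
using namespace metal;

fragment float4 frag_main(float4 pos [[position]])
{
    if (pos.x < 0.0f)
        discard_fragment();                 // OpDemoteToHelperInvocation: keep running as a helper
    bool helper = simd_is_helper_thread();  // OpIsHelperInvocationEXT
    return float4(helper ? 0.0f : 1.0f);
}
```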
Chip Davis
cab7335e64 MSL: Don't set the layer for multiview if the device doesn't support it.
Some older iOS devices don't support layered rendering. In that case,
don't set `[[render_target_array_index]]`, because the compiler would
otherwise reject the shader. The client will then have to unroll the
render pass manually.
2020-09-01 19:30:28 -05:00
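
For reference, this is the attribute in question; on devices without layered rendering the member is simply not emitted into the output struct (names below follow the usual SPIRV-Cross conventions but are shown here only as an illustration):

```metal
#include <metal_stdlib>
using namespace metal;

struct main_out
{
    float4 gl_Position [[position]];
    uint gl_Layer [[render_target_array_index]];  // omitted when layered rendering is unsupported
};
```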
Chip Davis
53080ecca8 MSL: Fix multiview view index calculation with a non-zero base instance.
Account for a non-zero base instance when calculating the view index and
the "real" instance index. Before, it was likely broken with a non-zero
base instance, since the calculated instance index could be less than
the base instance.
2020-08-31 20:33:44 -05:00
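
A sketch of the corrected arithmetic, with hypothetical parameter names (this is the idea, not the literal emitted code): the instance ID has to be made relative to the base instance before the view index and "real" instance index are split out.

```metal
#include <metal_stdlib>
using namespace metal;

static inline uint2 spvViewAndInstance(uint instance_id, uint base_instance,
                                       uint first_view, uint view_count)
{
    uint rel = instance_id - base_instance;                 // offset by the base instance first
    uint view_index = first_view + rel % view_count;        // view rendered by this invocation
    uint real_instance = base_instance + rel / view_count;  // application-visible instance index
    return uint2(view_index, real_instance);
}
```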
Dan Sinclair
d409210ee5 Move all .invalid shaders into no-opt folders. 2019-11-05 13:19:19 -05:00
Hans-Kristian Arntzen
d1479f871a MSL: Do not generate UnsafeArray<> for any array inside buffer objects.
This avoids a lot of huge code changes.
Arrays generally cannot be copied in and out of buffers; at least, no
compiler frontend seems to do it.

Also avoids a lot of issues surrounding packed vectors and matrices.
2019-10-24 12:22:30 +02:00
Lukas Hermanns
b0d616aa6d Removed 'argument_buffer_offset' and fixed packed matrix Metal output. 2019-10-23 16:28:32 -04:00
Lukas Hermanns
0853bcaee1 Disabled spvUnsafeArray<> type for packed vectors and added test cases for those arrays. 2019-10-09 17:59:47 -04:00
Lukas Hermanns
c3d6022956 Update for pull request #1162 rev. 1 2019-09-24 18:13:04 -04:00
Lukas Hermanns
7ad0a84778 Updates for pull request #1162 2019-09-24 14:35:25 -04:00
Lukas Hermanns
cb3ecb9e1b Updated reference Metal shaders. 2019-09-17 15:11:19 -04:00
Mark Satterthwaite
564cb3c08d Update the Metal shaders to account for changes in the shader compilation. 2019-09-11 15:06:05 -04:00
Hans-Kristian Arntzen
07c76f66b5 MSL: Add {Base,}{Vertex,Instance}Index to bitcast_from_builtin_load.
Totally missed these, so float(index) would not work correctly for
negative numbers.
2019-08-29 13:56:37 +02:00
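
A hedged sketch of why the bitcast matters (the vertex function here is illustrative): MSL hands the builtin over as an unsigned `[[vertex_id]]`, so converting straight to `float` misreads any value that is conceptually negative.

```metal
#include <metal_stdlib>
using namespace metal;

vertex float4 vert_main(uint vertex_id [[vertex_id]])
{
    // float(vertex_id) would turn a conceptually negative index into a huge
    // positive value; bitcasting to int first preserves the signed meaning.
    float f = float(as_type<int>(vertex_id));
    return float4(f, 0.0f, 0.0f, 1.0f);
}
```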
Hans-Kristian Arntzen
c62503bca7 Do not attempt to pack types which are already scalar. 2019-07-24 11:52:28 +02:00
Hans-Kristian Arntzen
6057ffcbb1 Deal correctly with complete stores to row_major matrices. 2019-07-22 15:49:17 +02:00
Hans-Kristian Arntzen
180a6b38c5 Fix some row-major column store cases. 2019-07-22 12:56:14 +02:00
Hans-Kristian Arntzen
be2fccd837 Tests run clean. 2019-07-22 10:23:39 +02:00
Hans-Kristian Arntzen
6c1f97b4a9 Fix unpacking of packed but not remapped types on load. 2019-07-19 14:50:35 +02:00
Hans-Kristian Arntzen
b66a53a979 Traverse correct types when checking scalar layout. 2019-07-19 14:43:42 +02:00
Chip Davis
12a8654784 Don't forward uses of an OpIsHelperInvocationEXT op.
If this is computed *before* a `demote`, but used *after*, forwarding it
will produce the wrong value. This does make for uglier shaders, but
it's necessary right now to ensure correctness.

I needed to use an assembly shader to produce the test for this.
`spirv-opt` is not smart enough (or too smart?) to eliminate the
variable that would be used in GLSL to express this.
2019-07-18 17:32:35 -05:00
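
In MSL terms (using the analogues from the later MSL commits above, purely as an illustration), the point is that the query has to be materialized in a temporary before the demote rather than forwarded into uses after it:

```metal
#include <metal_stdlib>
using namespace metal;

fragment float4 frag_main(float4 pos [[position]])
{
    bool was_helper = simd_is_helper_thread();  // captured *before* the demote
    if (pos.x < 0.0f)
        discard_fragment();                     // demote: this invocation becomes a helper
    // Later uses must read was_helper; re-evaluating the query here would
    // observe the post-demote state instead.
    return float4(was_helper ? 0.0f : 1.0f);
}
```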
Chip Davis
50dce10c5d Support the SPV_EXT_demote_to_helper_invocation extension.
This extension provides a new operation which causes a fragment to be
discarded without terminating the fragment shader invocation. The
invocation for the discarded fragment becomes a helper invocation, so
that derivatives will remain defined. The old `HelperInvocation` builtin
becomes undefined when this occurs, so a second new instruction queries
the current helper invocation status.

This is only fully supported for GLSL. HLSL doesn't support the
`IsHelperInvocation` operation and MSL doesn't support the
`DemoteToHelperInvocation` op.

Fixes #1052.
2019-07-17 09:12:22 -05:00
Chip Davis
6a58554568 Support the SPV_KHR_device_group extension.
The only piece added by this extension is the `DeviceIndex` builtin,
which tells the shader which device in a grouped logical device it is
running on.

Metal's pipeline state objects are owned by the `MTLDevice` that created
them. Since Metal doesn't support logical grouping of devices the way
Vulkan does, we'll thus have to create a pipeline state for each device
in a grouped logical device. The upcoming peer group support in Metal 3
will not change this. For this reason, for Metal, the device index is
supplied as a constant at pipeline compile time.

There's an interaction between `VK_KHR_device_group` and
`VK_KHR_multiview` in the
`VK_PIPELINE_CREATE_VIEW_INDEX_FROM_DEVICE_INDEX_BIT`, which defines the
view index to be the same as the device index. The new
`view_index_from_device_index` MSL option supports this functionality.
2019-07-13 16:45:54 -05:00
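
One plausible shape for "supplied as a constant at pipeline compile time" is a Metal function constant; the name and index below are hypothetical and not necessarily what SPIRV-Cross emits:

```metal
#include <metal_stdlib>
using namespace metal;

constant int spvDeviceIndex [[function_constant(0)]];  // set per device when building the pipeline state

fragment float4 frag_main()
{
    // With view_index_from_device_index, the view index would read this same value.
    return float4(float(spvDeviceIndex));
}
```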
Chip Davis
28454facbb MSL: Handle packed matrices.
The old method of using a different unpacked matrix type doesn't work
for scalar alignment. It certainly wouldn't have any effect for a square
matrix, since the number of columns equals the number of rows. So now we'll
store them as arrays of packed vectors.
2019-07-10 18:37:31 -05:00
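
Conceptually (member names illustrative), a packed matrix member in a buffer block is now laid out as an array of packed column vectors rather than getting a separate "unpacked" matrix type:

```metal
#include <metal_stdlib>
using namespace metal;

struct UBO
{
    packed_float3 m[3];  // stands in for a packed float3x3: three 12-byte columns
    float4 color;
};
```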
Chip Davis
e5fa7edfd6 MSL: Support scalar block layout.
Relaxed block layout relaxed the restrictions on vector alignment,
allowing them to be aligned on scalar boundaries. Scalar block layout
relaxes this further, allowing *any* member to be aligned on a scalar
boundary. The requirement that a vector not improperly straddle a
16-byte boundary is also relaxed.

I've also added a test showing that `std430` layout works with UBOs.

I'm troubled by the dual meaning of the `Packed` extended decoration. In
some instances (struct, `float[]`, and `vec2[]` members), it actually
means the exact opposite, that the member needs extra padding. This is
especially problematic for `vec2[]`, because now we need to distinguish
the two cases by checking the array stride. I wonder if this should
actually be split into two decorations.
2019-07-09 20:59:32 -05:00
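
As a small illustration of what the relaxation permits (offsets illustrative): under scalar block layout a vector only needs scalar alignment, so a member like this has to be emitted packed to land on the right offset.

```metal
#include <metal_stdlib>
using namespace metal;

struct SSBO
{
    float a;          // offset 0
    packed_float3 v;  // offset 4: legal under scalar layout, not under std140/std430
};
```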
Chip Davis
31b6c93516 MSL: Support SubgroupLocalInvocationId and SubgroupSize in all stages.
MSL prior to 2.2 doesn't support these natively in any stage but
compute. But, we can (assuming no threads were terminated prematurely)
get their values with some creative uses of the
`simd_prefix_exclusive_sum()` and `simd_sum()` functions.

Also, fix a missing `to_expression()` with `BuiltInSubgroupEqMask`.

For KhronosGroup/MoltenVK#629.
2019-07-02 11:48:59 -05:00
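
The trick, sketched (assuming, as the commit notes, that no threads terminated early): every active thread contributes 1, so an exclusive prefix sum yields the invocation ID and a full sum yields the subgroup size.

```metal
#include <metal_stdlib>
using namespace metal;

fragment float4 frag_main()
{
    uint invocation_id = simd_prefix_exclusive_sum(1u);  // number of active lanes before this one
    uint subgroup_size = simd_sum(1u);                   // total number of active lanes
    return float4(float(invocation_id), float(subgroup_size), 0.0f, 1.0f);
}
```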
Chip Davis
7eecf5a46b MSL: Support SPV_KHR_multiview.
This is needed to support `VK_KHR_multiview`, which is in turn needed
for Vulkan 1.1 support. Unfortunately, Metal provides no native support
for this, and Apple is once again less than forthcoming, so we have to
implement it all ourselves.

Tessellation and geometry shaders are deliberately unsupported for now.
The problem is that the current implementation encodes the `ViewIndex`
as part of the `InstanceIndex`, which in the SPIR-V environment at least
only exists in the vertex shader. So we need to work out a way to pass
the view index along to the later stages.

This implementation runs vertex shaders for all views up to the highest
bit set in the view mask, even those whose bits are clear. The fragments
for the inactive views are then discarded. Avoiding this is difficult:
calculating the view indices becomes far more complicated if we can only
run for those views which are set in the mask.
2019-06-29 09:43:55 -05:00
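
A hedged sketch of the "discard fragments for inactive views" part; the helper name and exact shape are hypothetical:

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical helper: true when the given view has its bit set in the mask.
// The fragment shader would call discard_fragment() when this returns false.
static inline bool spvViewIsActive(uint view_mask, uint view_index)
{
    return (view_mask & (1u << view_index)) != 0u;
}
```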
Chip Davis
8983920edf Remove fallback for OpGroupNonUniformElect.
It's not safe to enable subgroup support without this actually working
correctly.
2019-05-16 13:42:09 -05:00
Chip Davis
9d9415754b MSL: Add support for subgroup operations.
Some support for subgroups is present starting in Metal 2.0 on both iOS
and macOS. macOS gains more complete support in 10.14 (Metal 2.1).

Some restrictions are present. On iOS and on macOS 10.13, the
implementation of `OpGroupNonUniformElect` is incorrect: if thread 0 has
already terminated or is not executing a conditional branch, the first
thread that *is* will falsely believe it is not the elected one. Unfortunately,
this operation is part of the "basic" feature set; without it, subgroups
cannot be supported at all.

The `SubgroupSize` and `SubgroupLocalInvocationId` builtins are only
available in compute shaders (and, by extension, tessellation control
shaders), despite SPIR-V making them available in all stages. This
limits the usefulness of some of the subgroup operations in fragment
shaders.

Although Metal on macOS supports some clustered, inclusive, and
exclusive operations, it does not support them all. In particular,
inclusive and exclusive min, max, and, or, and xor, as well as cluster
sizes other than 4, are not supported. If this becomes a problem, they
could be emulated, but at a significant performance cost due to the need
for non-uniform operations.
2019-05-15 17:40:04 -05:00
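
A few of the mappings, sketched (illustrative only; availability varies by OS and MSL version as described above):

```metal
#include <metal_stdlib>
using namespace metal;

// Rough correspondence, subject to the availability caveats above:
//   OpGroupNonUniformElect          -> simd_is_first()
//   OpGroupNonUniformBroadcastFirst -> simd_broadcast_first(x)
//   OpGroupNonUniformBallot         -> simd_ballot(pred)
fragment float4 frag_main(float4 pos [[position]])
{
    bool  leader = simd_is_first();              // see the thread-0 caveat above
    float first  = simd_broadcast_first(pos.x);  // value from the first active lane
    return float4(first, leader ? 1.0f : 0.0f, 0.0f, 1.0f);
}
```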
Chip Davis
1fb27b4cda Add support for 8- and 16-bit types to GLSL and MSL.
In GLSL, 8-bit types require GL_EXT_shader_8bit_storage. 16-bit types
can use either GL_AMD_gpu_shader_int16/GL_AMD_gpu_shader_half_float or
GL_EXT_shader_16bit_storage.
2018-11-01 10:20:57 -05:00
Hans-Kristian Arntzen
62db535b3f Update tests. 2018-11-01 11:23:48 +01:00
Hans-Kristian Arntzen
652d8263e5 Use correct spirv-cross version when testing MSL 1.1 shaders. 2018-09-07 09:45:25 +02:00
Hans-Kristian Arntzen
32823b0838 MSL: Do not emit function constants for version < 1.2. 2018-09-07 09:33:34 +02:00
Bill Hollings
9b4defe202 CompilerMSL support matrices & arrays in stage-in & stage-out.
Support flattening StorageOutput & StorageInput matrices and arrays.
No longer move matrix & array inputs to a separate buffer.
Add separate SPIRFunction::fixup_statements_in & SPIRFunction::fixup_statements_out
instead of just SPIRFunction::fixup_statements.
Emit SPIRFunction::fixup_statements at beginning of functions.
CompilerMSL track vars_needing_early_declaration.
Pass global output variables as variables to functions that access them.
Sort input structs by location, same as output structs.
Emit struct declarations in order output, input, uniforms.
Regenerate reference shaders to new formats defined by above.
2018-06-12 11:41:35 -04:00
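
Conceptually (attribute indices illustrative), flattening means a matrix vertex input is expressed as one attribute per column instead of being moved to a separate buffer:

```metal
#include <metal_stdlib>
using namespace metal;

struct VertexIn
{
    float4 m_0 [[attribute(0)]];  // column 0 of a float4x4 vertex input
    float4 m_1 [[attribute(1)]];
    float4 m_2 [[attribute(2)]];
    float4 m_3 [[attribute(3)]];
};
```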
Hans-Kristian Arntzen
991b655c72 Declare OpSpecConstantOp up-front on relevant targets.
Required, since spec constants can include results from constant ops.
2018-05-15 14:20:16 +02:00
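
Roughly what "declare up front" means here (names illustrative): the specialization constant itself is a function constant, and the result of an OpSpecConstantOp becomes an ordinary program-scope constant derived from it, so it must exist before any use.

```metal
#include <metal_stdlib>
using namespace metal;

constant int spec_a [[function_constant(0)]];
constant int spec_a_plus_one = spec_a + 1;  // OpSpecConstantOp result, declared up front
```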
Bill Hollings
1f83856366 CompilerMSL add support for MSL specialization function constants.
CompilerMSL add emit_custom_functions() function.
CompilerMSL restrict use of as_type<> cast to necessary conditions.
CompilerMSL refactor get_declared_struct_member_size() and
get_declared_struct_member_alignment() functions, and remove
unnecessary get_declared_type_size() functions.
Add test shaders-msl/vulkan/frag/spec-constant.vk.frag.
2017-06-15 15:24:22 -04:00
Bill Hollings
192bdc9516 CompilerMSL elide unused builtins from entry function input and output structs.
Add Compiler::has_active_builtin() function.
Update test reference shaders that included unused builtins.
2017-05-24 09:31:38 -04:00
Hans-Kristian Arntzen
91379fb0d0 Implement more sophisticated check for point_size. 2017-05-23 10:15:22 +02:00
Bill Hollings
f9f87ca391 CompilerMSL output [[point_size]] attribute for BuiltInPointSize member by default.
CompilerMSL::Options::is_rendering_points defaults to true.
2017-05-05 16:13:55 -04:00
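
For reference, the attribute in question as it would appear in the vertex output struct (member names follow the usual SPIRV-Cross conventions, shown here illustratively):

```metal
#include <metal_stdlib>
using namespace metal;

struct main_out
{
    float4 gl_Position [[position]];
    float gl_PointSize [[point_size]];  // emitted by default for BuiltInPointSize
};
```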
Bill Hollings
be4cb17a14 Enhance MSL testing and add numerous MSL test cases.
Add to the suite of MSL tests and references any existing GLSL tests
that successfully convert GLSL->SPIRV->MSL and compile as MSL.
test_shaders_helper() ignores hidden files that start with '.',
to avoid accidentally finding hidden OSX files such as .DS_Store.
Use xcrun to compile MSL shaders instead of hard-coded path to Metal compiler.
Wrap calls to xcrun in exception handling, ignoring the error if Xcode is not installed.
For MSL tests, move call to validate_shader_msl() to after call to
regression_check() to allow a converted MSL shader to be saved for
manual review even if it doesn't successfully compile as MSL.
2017-01-30 22:55:21 -05:00