Commit Graph

226 Commits

Author SHA1 Message Date
Chip Davis
103817009c MSL: Force storage images on iOS to use discrete descriptors.
Writable textures cannot use argument buffers on iOS. They must be
passed as arguments directly to the shader function. Since we won't know
if a given storage image will have the `NonWritable` decoration at the
time we encode the argument buffer, we must therefore pass all storage
images as discrete arguments. Previously, we were throwing an error if
we encountered an argument buffer with a writable texture in it on iOS.
2019-09-05 11:01:05 -05:00
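For illustration, a minimal sketch of a generated entry point under this rule, in MSL (a C++ dialect); the resource names and binding indices are invented, not what SPIRV-Cross actually emits. The read-only texture stays in the argument buffer, while the writable storage image becomes a discrete `[[texture(n)]]` argument.

```cpp
#include <metal_stdlib>
using namespace metal;

// Read-only resources can still live in the argument buffer.
struct spvDescriptorSet0
{
    texture2d<float> uReadOnlyImage [[id(0)]];
};

// The writable storage image is passed as a discrete argument, since iOS
// argument buffers cannot contain writable textures.
kernel void main0(constant spvDescriptorSet0& set0 [[buffer(0)]],
                  texture2d<float, access::write> uStorageImage [[texture(0)]],
                  uint2 gid [[thread_position_in_grid]])
{
    uStorageImage.write(set0.uReadOnlyImage.read(gid), gid);
}
```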
Chip Davis
39dce88d3b MSL: Add support for sampler Y'CbCr conversion.
This change introduces functions and, in one case, a class, to support
the `VK_KHR_sampler_ycbcr_conversion` extension. Except in the case of
GBGR8 and BGRG8 formats, for which Metal natively supports implicit
chroma reconstruction, we're on our own here. We have to do everything
ourselves. Much of the complexity comes from the need to support
multiple planes, which must now be passed to functions that use the
corresponding combined image-samplers. The rest is from the actual
Y'CbCr conversion itself, which requires additional post-processing of
the sample retrieved from the image.

Passing sampled images to a function was a particular problem. To
support this, I've added a new class which is emitted to MSL shaders
that pass around sampled images with Y'CbCr conversions attached. It
can handle sampled images with or without Y'CbCr conversion. This is an
awful abomination that should not exist, but I'm worried that there's
some shader out there which does this. This support requires Metal 2.0
to work properly, because it uses default-constructed texture objects,
which were only added in MSL 2. I'm not even going to get into arrays of
combined image-samplers--that's a whole other can of worms.  They are
deliberately unsupported in this change.

I've taken the liberty of refactoring the support for texture swizzling
while I'm at it. It's now treated as a post-processing step similar to
Y'CbCr conversion. I'd like to think this is cleaner than having
everything in `to_function_name()`/`to_function_args()`. It still looks
really hairy, though. I did, however, get rid of the explicit type
arguments to `spvGatherSwizzle()`/`spvGatherCompareSwizzle()`.

Update the C API. In addition to supporting this new functionality, add
some compiler options that I added in previous changes, but for which I
neglected to update the C API.
2019-09-01 18:35:53 -05:00
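For illustration, a heavily simplified sketch of the kind of wrapper described above, using a hypothetical `ycbcr_sampler_2d` type; the class SPIRV-Cross actually emits handles more planes, resolution scaling, range expansion, and chroma reconstruction, and requires MSL 2.0.

```cpp
#include <metal_stdlib>
using namespace metal;

// Hypothetical wrapper bundling the planes, the sampler, and a flag saying
// whether a Y'CbCr conversion is attached.
struct ycbcr_sampler_2d
{
    texture2d<float> plane0;  // luma, or the whole image if no conversion
    texture2d<float> plane1;  // chroma plane, when present
    sampler          samp;
    bool             convert;

    float4 sample(float2 uv) const
    {
        if (!convert)
            return plane0.sample(samp, uv);

        // Post-process the sampled values into RGB (BT.601 full range,
        // shown only as an example of the conversion step).
        float  y    = plane0.sample(samp, uv).r;
        float2 cbcr = plane1.sample(samp, uv).rg - 0.5;
        return float4(y + 1.402 * cbcr.y,
                      y - 0.344136 * cbcr.x - 0.714136 * cbcr.y,
                      y + 1.772 * cbcr.x,
                      1.0);
    }
};

// User functions can now take the wrapper by value, with or without a
// conversion attached, instead of a raw texture/sampler pair.
static float4 fetch_color(ycbcr_sampler_2d tex, float2 uv)
{
    return tex.sample(uv);
}
```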
Hans-Kristian Arntzen
3ccfbce264 Run format_all.sh. 2019-08-28 14:25:26 +02:00
Hans-Kristian Arntzen
9436cd3036 MSL: Deal with array copies from and to threadgroup. 2019-08-27 13:18:01 +02:00
Chip Davis
df18d98bea MSL: Unify the get_*_address_space() methods.
These methods have largely the same logic, with minor differences. That
I felt compelled to duplicate the logic into another method was one of
the things that bothered me about the variable pointers change. This
cleans that part of the code up; now we don't have two places to change.
2019-07-26 09:43:28 -05:00
Chip Davis
fb5ee4cb5c MSL: Adjust BuiltInWorkgroupId for vkCmdDispatchBase().
This command allows the caller to set the base value of
`BuiltInWorkgroupId`, and thus of `BuiltInGlobalInvocationId`. Metal
provides no direct support for this... but it does provide a builtin,
`[[grid_origin]]`, normally used to pass the base values for the stage
input region, which we will now abuse to pass the dispatch base and
avoid burning a buffer binding.

`[[grid_origin]]`, as part of Metal's support for compute stage input,
requires MSL 1.2. For 1.0 and 1.1, we're forced to provide a buffer.

(Curiously, this builtin was undocumented until the MSL 2.2 release. Go
figure.)
2019-07-24 08:56:15 -05:00
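A minimal sketch of the MSL 1.2+ path, with invented parameter names; only `[[grid_origin]]` itself is taken from the commit above. On MSL 1.0/1.1 the same value would instead arrive through an extra buffer argument.

```cpp
#include <metal_stdlib>
using namespace metal;

kernel void main0(device uint* output [[buffer(0)]],
                  uint3 group_id      [[threadgroup_position_in_grid]],
                  uint3 dispatch_base [[grid_origin]],  // base from vkCmdDispatchBase()
                  uint3 group_count   [[threadgroups_per_grid]])
{
    // gl_WorkGroupID must include the dispatch base.
    uint3 wg = group_id + dispatch_base;
    uint index = (wg.z * group_count.y + wg.y) * group_count.x + wg.x;
    output[index] = 1u;
}
```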
Hans-Kristian Arntzen
5c1cb7accf Recursively pack struct types when we find scalar packed structs. 2019-07-23 15:24:53 +02:00
Hans-Kristian Arntzen
3fa2b14634 Run format_all.sh. 2019-07-23 12:23:41 +02:00
Hans-Kristian Arntzen
7277c7ac46 Use to_unpacked_row_major_expression to unify row-major in MSL/GLSL. 2019-07-23 11:36:54 +02:00
Hans-Kristian Arntzen
2172b19be2 Remove obsolete matrix workaround code. 2019-07-22 16:27:47 +02:00
Hans-Kristian Arntzen
14afb968dd Correctly unpack row-major matrices when storing to LHS. 2019-07-22 12:03:12 +02:00
Hans-Kristian Arntzen
be2fccd837 Tests run clean. 2019-07-22 10:23:39 +02:00
Hans-Kristian Arntzen
e90d816cdd Deal with scalar layout of entire structs.
Mark all candidate struct types.
2019-07-19 14:18:14 +02:00
Hans-Kristian Arntzen
12c5020854 Pass down row-major state to unpacking functions. 2019-07-19 13:03:08 +02:00
Hans-Kristian Arntzen
27b75c2c5a Deal with all forms of matrix writes ... 2019-07-19 12:53:10 +02:00
Hans-Kristian Arntzen
f6251e4699 Can deal with std140 matrices now.
Refactor is coming together.
2019-07-19 11:21:02 +02:00
Hans-Kristian Arntzen
b09b8d3fa9 Deal more cleanly with matrices and row-major. 2019-07-19 10:06:19 +02:00
Hans-Kristian Arntzen
c160d5227f Reintroduce struct_member_* MSL queries.
Need to remap to physical type + packed qualifier, and this is handy to
do in a helper function.
2019-07-19 10:06:19 +02:00
Hans-Kristian Arntzen
a86308bce1 MSL: Begin rewrite of buffer packing logic. 2019-07-19 10:06:19 +02:00
Hans-Kristian Arntzen
c7eda1bce9 Test glsl.std450 more exhaustively.
Make sure to test everything with scalar as well to catch any weird edge
cases.

Not all opcodes are covered here, just the arithmetic ones. FP64 packing
is also ignored.
2019-07-17 11:53:05 +02:00
Hans-Kristian Arntzen
33d2bbcf69 Merge branch 'msl-amd-trinary-functions' of git://github.com/cdavis5e/SPIRV-Cross 2019-07-15 09:46:31 +02:00
Chip Davis
6a58554568 Support the SPV_KHR_device_group extension.
The only piece added by this extension is the `DeviceIndex` builtin,
which tells the shader which device in a grouped logical device it is
running on.

Metal's pipeline state objects are owned by the `MTLDevice` that created
them. Since Metal doesn't support logical grouping of devices the way
Vulkan does, we'll thus have to create a pipeline state for each device
in a grouped logical device. The upcoming peer group support in Metal 3
will not change this. For this reason, for Metal, the device index is
supplied as a constant at pipeline compile time.

There's an interaction between `VK_KHR_device_group` and
`VK_KHR_multiview` via the
`VK_PIPELINE_CREATE_VIEW_INDEX_FROM_DEVICE_INDEX_BIT` flag, which defines the
view index to be the same as the device index. The new
`view_index_from_device_index` MSL option supports this functionality.
2019-07-13 16:45:54 -05:00
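One plausible shape of the MSL side, assuming a Metal function constant with an invented ID; the real output may simply bake the numeric value into the generated source instead.

```cpp
#include <metal_stdlib>
using namespace metal;

// Hypothetical: the device index is supplied when the pipeline is compiled
// for one particular device of the group.
constant uint spvDeviceIndex [[function_constant(0)]];

fragment float4 main0()
{
    // gl_DeviceIndex resolves to the compile-time value. With
    // VK_PIPELINE_CREATE_VIEW_INDEX_FROM_DEVICE_INDEX_BIT and the
    // view_index_from_device_index option, gl_ViewIndex does too.
    return float4(float(spvDeviceIndex), 0.0, 0.0, 1.0);
}
```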
Chip Davis
ca91fcfe5f MSL: Support the SPV_AMD_shader_trinary_minmax extension.
This requires MSL 2.1.
2019-07-13 16:43:57 -05:00
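A small sketch of the mapping, assuming the MSL 2.1 three-operand functions `min3`/`max3`/`median3`; the kernel and variable names are invented.

```cpp
#include <metal_stdlib>
using namespace metal;

// The extension's Min3/Max3/Mid3 instructions map onto MSL's
// three-operand min, max, and median functions.
kernel void main0(device float4* data [[buffer(0)]],
                  uint gid [[thread_position_in_grid]])
{
    float a = data[gid].x, b = data[gid].y, c = data[gid].z;
    data[gid].w = min3(a, b, c) + max3(a, b, c) + median3(a, b, c);
}
```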
Chip Davis
058f1a0933 MSL: Handle coherent, volatile, and restrict.
This maps them to their MSL equivalents. I've mapped `Coherent` to
`volatile` since MSL doesn't have anything weaker than `volatile` but
stronger than nothing.

As part of this, I had to remove the implicit `volatile` added for
atomic operation casts. If the buffer is already `coherent` or
`volatile`, then we would add a second `volatile`, which would be
redundant. I think this is OK even when the buffer *doesn't* have
`coherent`: `T *` is implicitly convertible to `volatile T *`, but not
vice-versa. It seems to compile OK at any rate. (Note that the
non-`volatile` overloads of the atomic functions documented in the spec
aren't present in the MSL 2.2 stdlib headers.)

`restrict` is tricky, because in MSL, as in C++, it needs to go *after*
the asterisk or ampersand for the pointer type it's modifying.

Another issue is that, in the `Simple`, `GLSL450`, and `Vulkan` memory
models, `Restrict` is the default (i.e. does not need to be specified);
but MSL likely follows the `OpenCL` model where `Aliased` is the
default. We probably need to implicitly set either `Restrict` or
`Aliased` depending on the module's declared memory model.
2019-07-11 10:22:30 -05:00
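For illustration, a sketch of where the qualifiers end up; the names are invented, and the exact spelling of the restrict qualifier in real output may differ from the `__restrict` used here.

```cpp
#include <metal_stdlib>
using namespace metal;

kernel void main0(
    // Coherent (and Volatile) map onto MSL's volatile qualifier.
    volatile device uint* coherent_counter [[buffer(0)]],
    // Restrict goes after the * (or &) of the pointer it modifies.
    const device float4* __restrict in_data  [[buffer(1)]],
    device float4*       __restrict out_data [[buffer(2)]],
    uint gid [[thread_position_in_grid]])
{
    out_data[gid] = in_data[gid];
    coherent_counter[gid] = gid;
}
```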
Chip Davis
28454facbb MSL: Handle packed matrices.
The old method of using a different unpacked matrix type doesn't work
for scalar alignment. It certainly wouldn't have any effect for a square
matrix, since the number of columns and the number of rows are the same.
So now we'll
store them as arrays of packed vectors.
2019-07-10 18:37:31 -05:00
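A minimal sketch of the resulting layout, with invented names: a scalar-aligned `mat3` becomes an array of packed vectors and is reassembled before use.

```cpp
#include <metal_stdlib>
using namespace metal;

struct UBO
{
    packed_float3 model[3];  // scalar-aligned mat3: 3 columns of 12 bytes each
    float         scale;     // lands at offset 36, as the packed layout requires
};

static float3 transform(constant UBO& ubo, float3 v)
{
    // Reassemble the matrix from its packed columns before using it.
    float3x3 m(float3(ubo.model[0]), float3(ubo.model[1]), float3(ubo.model[2]));
    return m * v * ubo.scale;
}
```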
Chip Davis
e5fa7edfd6 MSL: Support scalar block layout.
Relaxed block layout loosened the restrictions on vector alignment,
allowing vectors to be aligned on scalar boundaries. Scalar block layout
relaxes this further, allowing *any* member to be aligned on a scalar
boundary. The requirement that a vector not improperly straddle a
16-byte boundary is also relaxed.

I've also added a test showing that `std430` layout works with UBOs.

I'm troubled by the dual meaning of the `Packed` extended decoration. In
some instances (struct, `float[]`, and `vec2[]` members), it actually
means the exact opposite, that the member needs extra padding. This is
especially problematic for `vec2[]`, because now we need to distinguish
the two cases by checking the array stride. I wonder if this should
actually be split into two decorations.
2019-07-09 20:59:32 -05:00
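A worked example of what scalar block layout permits, with invented member names; the offset comments show where each member lands.

```cpp
#include <metal_stdlib>
using namespace metal;

struct SSBO
{
    float         a;  // offset 0
    packed_float2 b;  // offset 4  (std430 would round this up to 8)
    packed_float3 c;  // offset 12: straddles a 16-byte boundary, which only
                      //            scalar block layout allows
    float         d;  // offset 24
};

kernel void main0(device SSBO& ssbo [[buffer(0)]])
{
    ssbo.d = ssbo.a + float2(ssbo.b).x + float3(ssbo.c).y;
}
```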
Hans-Kristian Arntzen
909040e2eb MSVC 2013: Work around another compiler bug with array init. 2019-07-09 15:31:01 +02:00
Hans-Kristian Arntzen
041f103d44 MSL/HLSL: Support scalar reflect and refract. 2019-07-03 12:31:52 +02:00
Chip Davis
7eecf5a46b MSL: Support SPV_KHR_multiview.
This is needed to support `VK_KHR_multiview`, which is in turn needed
for Vulkan 1.1 support. Unfortunately, Metal provides no native support
for this, and Apple is once again less than forthcoming, so we have to
implement it all ourselves.

Tessellation and geometry shaders are deliberately unsupported for now.
The problem is that the current implementation encodes the `ViewIndex`
as part of the `InstanceIndex`, which in the SPIR-V environment at least
only exists in the vertex shader. So we need to work out a way to pass
the view index along to the later stages.

This implementation runs vertex shaders for all views up to the highest
bit set in the view mask, even those whose bits are clear. The fragments
for the inactive views are then discarded. Avoiding this is difficult:
calculating the view indices becomes far more complicated if we can only
run for those views which are set in the mask.
2019-06-29 09:43:55 -05:00
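One plausible encoding, shown only as a sketch with invented names and binding index; the view range/mask handling in the real output differs. The application multiplies the instance count by the number of views, and the shader unpacks both values from the Metal instance ID.

```cpp
#include <metal_stdlib>
using namespace metal;

vertex float4 main0(uint instance_id [[instance_id]],
                    uint vid         [[vertex_id]],
                    constant uint2& spvViewRange [[buffer(24)]])  // {first view, view count}
{
    uint view_count     = spvViewRange.y;
    uint view_index     = spvViewRange.x + instance_id % view_count;  // gl_ViewIndex
    uint instance_index = instance_id / view_count;                   // gl_InstanceIndex
    // The view index would normally select a per-view transform or layer;
    // here it is just written into the output to keep the sketch self-contained.
    return float4(float(view_index), float(instance_index), float(vid), 1.0);
}
```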
Hans-Kristian Arntzen
c76b99b711 Handle more cases with FP16 and texture sampling. 2019-06-27 15:04:22 +02:00
Hans-Kristian Arntzen
45805857e5 MSL: De-virtualize get_declared_struct_member_size.
It does not make sense to use a virtual call in the Compiler base class
here. Make it clearer by renaming the MSL-specific version to _msl.
2019-06-26 19:11:38 +02:00
Hans-Kristian Arntzen
048f2380f3 MSL: Support custom bindings for argument buffer itself. 2019-06-24 11:10:20 +02:00
Hans-Kristian Arntzen
b4e0163749 Run format_all.sh. 2019-06-21 16:02:22 +02:00
Hans-Kristian Arntzen
3a4a9acac9 MSL: Add C API for querying automatic resource bindings. 2019-06-21 13:19:59 +02:00
Hans-Kristian Arntzen
e2c95bdcbc MSL: Rewrite how resource indices are fallback-assigned.
We used to use the Binding decoration for this, but this method is
hopelessly broken. If no explicit MSL resource remapping exists, we
remap automatically in a manner which should always "just work".
2019-06-21 12:54:08 +02:00
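A host-side sketch of the idea (not the real CompilerMSL code): resources are walked in a stable order and each resource class hands out its next free index, independent of the SPIR-V `Binding` decoration.

```cpp
#include <cstdint>
#include <map>
#include <vector>

enum class ResourceClass { Buffer, Texture, Sampler };

struct Resource
{
    uint32_t      desc_set;
    uint32_t      binding;
    ResourceClass cls;
    uint32_t      msl_index = 0; // assigned below
};

// Assign fallback [[buffer(n)]]/[[texture(n)]]/[[sampler(n)]] indices.
static void assign_fallback_indices(std::vector<Resource>& resources)
{
    std::map<ResourceClass, uint32_t> next_index;
    for (auto& res : resources)                 // assumed sorted by (set, binding)
        res.msl_index = next_index[res.cls]++;  // separate counter per class
}
```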
Hans-Kristian Arntzen
7fdb418f18
Merge pull request #1028 from KhronosGroup/fix-1010
MSL: Support barycentrics and PrimitiveID in fragment shaders
2019-06-19 15:29:14 +02:00
Hans-Kristian Arntzen
2e1cee5e1e MSL: Support PrimitiveID in fragment and barycentrics. 2019-06-19 09:52:35 +02:00
Hans-Kristian Arntzen
f171d82590 MSL: Support MinLod operand. 2019-06-19 09:43:03 +02:00
Hans-Kristian Arntzen
30bb197a5d MSL: Support remapping constexpr samplers by set/binding.
The older API was oriented around IDs, which are not available unless
you're doing full reflection; that is awkward for use cases which know
their set/bindings up front.

Optimize resource bindings to use a hashmap rather than doing linear seeks
all the time.
2019-06-10 15:41:36 +02:00
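A sketch of the lookup structure, with invented type names: constexpr-sampler remaps keyed on (descriptor set, binding) in an `std::unordered_map`, so callers only need numbers they already know and lookups avoid linear scans.

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>

struct SetBindingPair
{
    uint32_t desc_set;
    uint32_t binding;
    bool operator==(const SetBindingPair& other) const
    {
        return desc_set == other.desc_set && binding == other.binding;
    }
};

struct SetBindingHasher
{
    size_t operator()(const SetBindingPair& key) const
    {
        return std::hash<uint64_t>{}((uint64_t(key.desc_set) << 32) | key.binding);
    }
};

struct ConstexprSamplerDesc { /* filter, address modes, ... */ };

using SamplerRemap =
    std::unordered_map<SetBindingPair, ConstexprSamplerDesc, SetBindingHasher>;

// Usage: remap[{1, 3}] = my_sampler;  // set 1, binding 3; no reflection needed
```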
Hans-Kristian Arntzen
314efdcc42 MSL: Fix declaration of unused input variables.
In multiple-entry-point modules, we declared builtin inputs which were
not supposed to be used for that entry point.

Fix this by being more strict when checking which builtins to emit.
2019-05-31 13:23:34 +02:00
Hans-Kristian Arntzen
7b9e0fb428 MSL: Implement OpArrayLength.
This gets rather complicated because MSL does not support OpArrayLength
natively. We need to pass down a buffer which contains buffer sizes, and
we compute the array length on-demand.

Support both discrete descriptors as well as argument buffers.
2019-05-27 16:13:09 +02:00
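For illustration, a sketch of the generated pattern, with invented names and binding index: a buffer of byte sizes lets the shader compute the length of a runtime array on demand.

```cpp
#include <metal_stdlib>
using namespace metal;

struct SSBO
{
    float4 data[1];  // runtime-sized array in the SPIR-V
};

kernel void main0(device SSBO& ssbo [[buffer(0)]],
                  device uint* out_len [[buffer(1)]],
                  constant uint* spvBufferSizeConstants [[buffer(25)]])
{
    // OpArrayLength: (buffer byte size - member offset) / array stride.
    uint len = (spvBufferSizeConstants[0] - 0u) / 16u;
    out_len[0] = len;
    if (len > 0u)
        ssbo.data[0] = float4(0.0);
}
```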
Hans-Kristian Arntzen
eaf7afed97 MSL: Support argument buffers and image swizzling.
Change aux buffer to swizzle buffer.
There is no good reason to expand the aux buffer, so name it
appropriately.

Make the code cleaner by emitting a straight pointer to uint rather than
a dummy struct which only contains a single unsized array member anyways.

This will also end up being very similar to how we implement swizzle
buffers for argument buffers.

Do not use implied binding if it overflows int32_t.
2019-05-18 10:30:06 +02:00
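For illustration, the shape of the "straight pointer to uint" described above, with an invented binding index and swizzle handling; the real output routes the word through swizzle helper functions instead.

```cpp
#include <metal_stdlib>
using namespace metal;

// Previously: a dummy struct wrapping a single unsized array, e.g.
//   struct spvAux { uint swizzleConst[1]; };
// Now the swizzle buffer is just a pointer to uint.
fragment float4 main0(texture2d<float> tex [[texture(0)]],
                      sampler smp [[sampler(0)]],
                      constant uint* spvSwizzleConstants [[buffer(30)]])
{
    uint swizzle = spvSwizzleConstants[0];   // packed component mapping for tex
    float4 c = tex.sample(smp, float2(0.5));
    return (swizzle == 0u) ? c : c.bgra;     // stand-in for the real remapping
}
```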
Chip Davis
9d9415754b MSL: Add support for subgroup operations.
Some support for subgroups is present starting in Metal 2.0 on both iOS
and macOS. macOS gains more complete support in 10.14 (Metal 2.1).

Some restrictions are present. On iOS and on macOS 10.13, the
implementation of `OpGroupNonUniformElect` is incorrect: if thread 0 has
already terminated or is inactive due to a conditional branch, the first
thread that *is* active will falsely believe itself not to be the elected
one. Unfortunately,
this operation is part of the "basic" feature set; without it, subgroups
cannot be supported at all.

The `SubgroupSize` and `SubgroupLocalInvocationId` builtins are only
available in compute shaders (and, by extension, tessellation control
shaders), despite SPIR-V making them available in all stages. This
limits the usefulness of some of the subgroup operations in fragment
shaders.

Although Metal on macOS supports some clustered, inclusive, and
exclusive operations, it does not support them all. In particular,
inclusive and exclusive min, max, bitwise and, or, and xor are missing, as
are cluster sizes other than 4. If this becomes a problem, they
could be emulated, but at a significant performance cost due to the need
for non-uniform operations.
2019-05-15 17:40:04 -05:00
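A rough sketch of the correspondence on macOS (Metal 2.0+); the builtin attributes and `simd_*` functions are real MSL names, but the surrounding kernel is invented. The `OpGroupNonUniformElect` caveat above applies to `simd_is_first()`.

```cpp
#include <metal_stdlib>
using namespace metal;

kernel void main0(device float* data [[buffer(0)]],
                  uint gid  [[thread_position_in_grid]],
                  uint sgsz [[threads_per_simdgroup]],       // gl_SubgroupSize
                  uint sgid [[thread_index_in_simdgroup]])   // gl_SubgroupInvocationID
{
    bool  elected = simd_is_first();                  // OpGroupNonUniformElect
    float total   = simd_sum(data[gid]);              // OpGroupNonUniformFAdd (Reduce)
    float other   = simd_shuffle_xor(data[gid], 1u);  // OpGroupNonUniformShuffleXor
    if (elected)
        data[gid] = total + other + float(sgsz + sgid);
}
```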
Hans-Kristian Arntzen
fc4f39b11f MSL: Support native texture_buffer type, throw error on atomics.
Atomics are not supported on images or texture_buffers in MSL.
Properly throw an error if OpImageTexelPointer is used (since it can
only be used for atomic operations anyways).
2019-04-23 12:21:43 +02:00
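A minimal sketch of the native type in use (MSL 2.1), with invented names; an atomic on either resource is what now triggers the error mentioned above.

```cpp
#include <metal_stdlib>
using namespace metal;

// samplerBuffer / imageBuffer map to texture_buffer rather than being
// emulated through another texture type.
kernel void main0(texture_buffer<float, access::read>  uTexelBuf [[texture(0)]],
                  texture_buffer<float, access::write> uImageBuf [[texture(1)]],
                  uint gid [[thread_position_in_grid]])
{
    uImageBuf.write(uTexelBuf.read(gid) * 2.0f, gid);
}
```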
Hans-Kristian Arntzen
3fe57d3798 Do not use SmallVector as input type in public interfaces.
This is an API break, which we need to be careful with.
Handing out SmallVectors is easier since the interface is basically the
same.
2019-04-09 15:09:44 +02:00
Hans-Kristian Arntzen
a489ba7fd1 Reduce pressure on global allocation.
- Replace ostringstream with custom implementation.
  ~30% performance uplift on vector-shuffle-oom test.
  Allocations are measurably reduced in Valgrind.

- Replace std::vector with SmallVector.
  Classic malloc optimization, small vectors are backed by inline data.
  ~ 7-8% gain on vector-shuffle-oom on GCC 8 on Linux.

- Use an object pool for IVariant type.
  We generally allocate a lot of SPIR* objects. We can amortize these
  allocations neatly by pooling them.

- ~15% overall uplift on ./test_shaders.py --iterations 10000 shaders/.
2019-04-09 15:09:44 +02:00
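A toy illustration of the small-buffer idea behind SmallVector (not the real implementation): the first N elements live in inline storage, so short vectors never touch the heap.

```cpp
#include <cstddef>
#include <new>
#include <utility>

template <typename T, size_t N = 8>
class TinyVector
{
public:
    TinyVector() = default;
    TinyVector(const TinyVector&) = delete;
    TinyVector& operator=(const TinyVector&) = delete;
    ~TinyVector()
    {
        for (size_t i = 0; i < count; i++)
            data()[i].~T();
        if (heap)
            ::operator delete(heap);
    }

    void push_back(const T& value)
    {
        if (count == capacity)
            grow();
        new (&data()[count++]) T(value);  // construct in place; no malloc while count <= N
    }
    size_t size() const { return count; }
    T& operator[](size_t i) { return data()[i]; }

private:
    T* data() { return heap ? heap : reinterpret_cast<T*>(stack); }
    void grow()
    {
        // Move the elements to freshly allocated heap storage and double capacity.
        size_t new_cap = capacity * 2;
        T* new_heap = static_cast<T*>(::operator new(new_cap * sizeof(T)));
        for (size_t i = 0; i < count; i++)
        {
            new (&new_heap[i]) T(std::move(data()[i]));
            data()[i].~T();
        }
        if (heap)
            ::operator delete(heap);
        heap = new_heap;
        capacity = new_cap;
    }

    alignas(T) unsigned char stack[N * sizeof(T)];  // inline storage for small sizes
    T*     heap = nullptr;
    size_t count = 0;
    size_t capacity = N;
};
```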
Hans-Kristian Arntzen
23db744e35 Deal with case where we need to emit SpvImplArrayCopy late.
We cannot deduce if OpLoad needs ArrayCopy templates early since it's
heavily context dependent, and we might only know on the third iteration
of the compile loop.
2019-04-09 12:28:46 +02:00
Hans-Kristian Arntzen
9b92e68d71 Add an option to override the namespace used for spirv_cross.
This is a pragmatic trick to avoid symbol collision where a project
links against SPIRV-Cross statically, while linking to other projects
which also use SPIRV-Cross statically. We can end up with very awkward
symbol collisions which can resolve themselves silently because
SPIRV-Cross is pulled in as necessary. To fix this, we must use
different symbols and embed two copies of SPIRV-Cross in this scenario,
now with different namespaces, which in turn leads to different symbols.
2019-03-29 10:29:44 +01:00
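A sketch of the mechanism, assuming the macro name shown here is the override knob (it is not taken from the commit text): the entire library lives inside a macro-selected namespace, so two embedded copies end up with distinct symbols.

```cpp
// spirv_cross_namespace_sketch.hpp
#ifndef SPIRV_CROSS_NAMESPACE_OVERRIDE
#define SPIRV_CROSS_NAMESPACE spirv_cross
#else
#define SPIRV_CROSS_NAMESPACE SPIRV_CROSS_NAMESPACE_OVERRIDE
#endif

namespace SPIRV_CROSS_NAMESPACE
{
class Compiler
{
    // ... library implementation ...
};
} // namespace SPIRV_CROSS_NAMESPACE

// A project embedding its own copy builds with e.g.
//   -DSPIRV_CROSS_NAMESPACE_OVERRIDE=my_engine_spvc
// and refers to my_engine_spvc::Compiler, which no longer collides with a
// second, differently namespaced copy linked elsewhere.
```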
Hans-Kristian Arntzen
e2aadf8995 Rename "push descriptor set" to "discrete descriptor set".
Check for the case where iOS doesn't support writable argument buffer
textures.
2019-03-15 21:53:21 +01:00
Hans-Kristian Arntzen
b3380ec9dd MSL: Support VK_KHR_push_descriptor.
If we have argument buffers, we also need to support using plain
descriptor sets for certain cases where API wants it.
2019-03-15 14:08:47 +01:00