This gets rather complicated because MSL does not support OpArrayLength
natively. We need to pass down a buffer containing buffer sizes and
compute the array length on demand.
Support both discrete descriptors and argument buffers.
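Roughly, for the discrete-descriptor case, the emitted MSL might look like
the following sketch (the struct layout, helper name, and buffer index 25
are illustrative, not the exact generated code):
```
#include <metal_stdlib>
using namespace metal;

struct SSBO
{
    uint count;     // 16-byte header before the unsized array (illustrative)
    uint pad[3];
    float data[1];  // unsized array in SPIR-V; declared with one element in MSL
};

// Hypothetical generated signature: the buffer sizes are just a uint*,
// bound to a buffer index chosen by the API layer.
kernel void main0(device SSBO& ssbo [[buffer(0)]],
                  constant uint* spvBufferSizeConstants [[buffer(25)]],
                  uint gid [[thread_position_in_grid]])
{
    // OpArrayLength: (buffer size - offset of the unsized array) / array stride.
    uint len = (spvBufferSizeConstants[0] - 16u) / 4u;
    if (gid < len)
        ssbo.data[gid] = 1.0f;
}
```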
MSL generally emits the aliases, which means we cannot always place the
master type first, unlike GLSL and HLSL. The logic fix is just to
reorder after we have tagged types with packing information, rather than
doing it in the parser fixup.
Change aux buffer to swizzle buffer.
There is no good reason to expand the aux buffer, so name it
appropriately.
Make the code cleaner by emitting a straight pointer to uint rather than
a dummy struct which only contains a single unsized array member anyway.
This will also end up being very similar to how we implement swizzle
buffers for argument buffers.
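As a rough illustration of the change (names and the buffer index are made up):
```
#include <metal_stdlib>
using namespace metal;

// Before (illustrative): a dummy struct wrapping a single unsized array.
struct spvAux
{
    uint swizzleConst[1];
};

// After (illustrative): the swizzle buffer is just a pointer to uint.
kernel void main0(constant uint* spvSwizzleConstants [[buffer(30)]],
                  device uint* out_data [[buffer(0)]],
                  uint gid [[thread_position_in_grid]])
{
    out_data[gid] = spvSwizzleConstants[gid & 3u];
}
```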
Do not use implied binding if it overflows int32_t.
Some support for subgroups is present starting in Metal 2.0 on both iOS
and macOS. macOS gains more complete support in 10.14 (Metal 2.1).
Some restrictions are present. On iOS and on macOS 10.13, the
implementation of `OpGroupNonUniformElect` is incorrect: if thread 0 has
already terminated or is not executing because of a conditional branch,
the first thread that *is* executing will falsely believe itself not to
be elected. Unfortunately, this operation is part of the "basic" feature
set; without it, subgroups cannot be supported at all.
The `SubgroupSize` and `SubgroupLocalInvocationId` builtins are only
available in compute shaders (and, by extension, tessellation control
shaders), despite SPIR-V making them available in all stages. This
limits the usefulness of some of the subgroup operations in fragment
shaders.
Although Metal on macOS supports some clustered, inclusive, and
exclusive operations, it does not support them all. In particular,
inclusive and exclusive min, max, and, or, and xor are not supported,
nor are cluster sizes other than 4. If this becomes a problem, they
could be emulated, but at a significant performance cost due to the need
for non-uniform operations.
MSL does not seem to have a qualifier for this, but HLSL SM 5.1 does.
glslangValidator for HLSL does not support this, so skip any validation,
but it passes in FXC.
Buffer objects can contain arbitrary pointers to blocks.
We can also implement ConvertPtrToU and ConvertUToPtr.
The latter can cast a uint64_t to any type as it pleases,
so we will need to generate fake buffer reference blocks to be able to
cast the type.
Atomics are not supported on images or texture_buffers in MSL.
Properly throw an error if OpImageTexelPointer is used (since it can
only be used for atomic operations anyway).
* origin/master:
Support running {,update_}test_shader.sh with CMake builds.
Don't apply vertex attribute remapping to non-vertex or non-input interface blocks
Force complex loop in certain rare access chain scenarios.
Fix guard around [[noreturn]].
Deal with mismatched signs in S/U/F conversion opcodes.
Workaround lack of lvalue/rvalue operator overload on MSVC 2013.
Support direct conversions to std::vector from SmallVector.
Fix some minor copy constructor issues in Variant.
Make sure ids_for_types are moved correctly in move operator.
Run format_all.sh.
Refactor out error handling and containers to new headers.
Do not use SmallVector as input type in public interfaces.
Fix various bugs found in testing.
Explicitly implement move operators for ParsedIR.
Try another MSVC 2013 workaround.
Implement edge cases in insert/end and add a simple test case.
Fix GCC 4.x warnings.
Workaround lack of alignas on MSVC 2013.
Reduce pressure on global allocation.
CLI: Make --iterations more useful.
- Replace ostringstream with custom implementation.
  ~30% performance uplift on vector-shuffle-oom test.
  Allocations are measurably reduced in Valgrind.
- Replace std::vector with SmallVector.
  Classic malloc optimization, small vectors are backed by inline data.
  ~7-8% gain on vector-shuffle-oom on GCC 8 on Linux.
- Use an object pool for IVariant type.
  We generally allocate a lot of SPIR* objects. We can amortize these
  allocations neatly by pooling them.
- ~15% overall uplift on ./test_shaders.py --iterations 10000 shaders/.
We cannot deduce if OpLoad needs ArrayCopy templates early since it's
heavily context dependent, and we might only know on the 3rd iteration of
the compile loop.
We had a bug where error conditions in the DoWhileLoop emit path would not
detect that statements were being emitted, due to the masking behavior
which happens when force_recompile is true. Fix this.
Also, refactor force_recompile into member functions so we can properly
break on any situation where this is set, without having to rely on
watchpoints in debuggers.
This is a pragmatic trick to avoid symbol collision where a project
links against SPIRV-Cross statically, while linking to other projects
which also use SPIRV-Cross statically. We can end up with very awkward
symbol collisions which can resolve themselves silently because
SPIRV-Cross is pulled in as necessary. To fix this, we must use
different symbols and embed two copies of SPIRV-Cross in this scenario,
now with different namespaces, which in turn leads to different symbols.
The design of backend.int16_t_literal_suffix and backend.uint16_t_literal_suffix
allows them to be set to null, but that was not always tested for.
I have removed the expectation that they can be null and set
backend.int16_t_literal_suffix to "" when no suffix is needed.
That has the same effect, and seemed to be a more usable and defensive approach.
Avoids ugly warnings on nearly every compute shader.
We could do analysis to detect whether we need to emit this constant,
but it's a bit tedious to figure out if an OpConstantComponent is
actually used by opcodes, so just make it simple.
This adds a new C API for SPIRV-Cross which is intended to be stable,
both API- and ABI-wise.
The C++ API has been refactored a bit to make the C wrapper easier and
cleaner to write. In particular, the vertex attribute / resource interfaces
for MSL have been rewritten to avoid taking mutable pointers into the
interface. Those would be very annoying to wrap and didn't fit well
with the rest of the C++ API to begin with. While doing this, I went
ahead and removed all the old deprecated interfaces.
The CMake build system has also seen an overhaul.
It is now possible to build static/shared/CLI separately with -D
options.
The shared library only exposes the C API, as it is the only ABI-stable
API. pkg-config files as well as CMake modules are exported and installed for
the shared library configuration.
We were using std::locale::global() to force a C locale which is not
safe when SPIRV-Cross is used in a multi-threaded environment.
To fix this, we could tap into various platform-specific locale
handling to get safe thread-local locales, but since locales only affect
the decimal point in floats, we simply query the locale instead and do
the necessary radix replacement ourselves, without touching the locale.
This should be much safer and cleaner than the alternative.
Return after loading the input control point array if there are more
input points than output points, and this was one of the helper
invocations spun off to load the input points. I was hesitant to do this
initially, since the MSL spec has this to say about barriers:
> The `threadgroup_barrier` (or `simdgroup_barrier`) function must be
> encountered by all threads in a threadgroup (or SIMD-group) executing
> the kernel.
That is, if any thread executes the barrier, then all threads must
execute it, or the barrier'd invocations will hang. But, the key words
here seem to be "executing the kernel;" inactive invocations, those that
have already returned, need not encounter the barrier to prevent hangs.
Indeed, I've encountered no problems from doing this, at least on my
hardware. This also fixes a few CTS tests that were failing due to
execution ordering; apparently, my assumption that the later, invalid
data written by the helpers would get overwritten was wrong.
If a stage takes the position as both an input and an output (i.e. a
tessellation shader or a geometry shader), then we could wind up fixing
up the input position by mistake. Ensure that doesn't happen, by only
setting the `qual_pos_var_name` variable from the output position.
The tessellation levels in Metal are stored as a densely-packed array of
half-precision floating point values. But, stage-in attributes in Metal
have to have offsets and strides aligned to a multiple of four, so we
can't add them individually. Luckily for us, the arrays are no longer
than 4 elements. So, let's use vectors for them!
Triangles get a single attribute with a `float4`, where the outer levels
are in `.xyz` and the inner levels are in `.w`. The arrays are unpacked
as though we had added the elements individually. Quads get two: a
`float4` with the outer levels and a `float2` with the inner levels.
Further, since vectors can be indexed as arrays, there's no need to
unpack them in this case.
This also saves on precious vertex attributes. Before, we were using up
to 6 of them. Now we need two at most.
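A sketch of the resulting stage-in declarations (attribute indices and names
are illustrative):
```
#include <metal_stdlib>
using namespace metal;

// Triangle patches: a single attribute, outer levels in .xyz, inner in .w.
struct TrianglePatchIn
{
    float4 gl_TessLevel [[attribute(0)]];
};

// Quad patches: two attributes, one for outer levels, one for inner levels.
struct QuadPatchIn
{
    float4 gl_TessLevelOuter [[attribute(0)]];
    float2 gl_TessLevelInner [[attribute(1)]];
};
```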
These are often arrayed builtins, which MSL maps to more than one
attribute. SPIRV-Cross automatically assigns succeeding addresses to
arrayed attributes, so we really only need the first one. This of course
assumes that the inputs are sorted by location.
Builtin attributes in SPIR-V aren't linked by location, but by their
built-in-ness. This poses a problem for MSL, since builtin inputs in
the vertex pipeline are just regular attributes. We must then assign
them locations so that they can be matched up to the attributes in the
stage input descriptor--and also to avoid duplicate attribute numbers in
tessellation evaluation shaders, where there are two different
stage-in structs, so the member index therein is no longer unique!
In SPIR-V, there are always two inner levels and four outer levels, even
if the input patch isn't a quad patch. But in MSL, due to requirements
imposed by Metal, only one inner level and three outer levels exist when
the input patch is a triangle patch. We must explicitly ignore any write
to the nonexistent second inner and fourth outer levels in this case.
This is intended to be used to support `VK_KHR_maintenance2`'s
tessellation domain origin feature. If `tess_domain_origin_lower_left`
is `true`, the `v` coordinate will be inverted with respect to the
domain. Additionally, in `Triangles` mode, the `v` and `w` coordinates
will be swapped. This is because the winding order is interpreted
differently in lower-left mode.
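A minimal sketch of the described remapping, written directly from the
description above rather than copied from the emitted code:
```
#include <metal_stdlib>
using namespace metal;

// With tess_domain_origin_lower_left enabled:
// invert v with respect to the domain; in Triangles mode, also swap v and w.
static inline float3 remap_triangle_domain(float3 uvw)
{
    uvw.y = 1.0f - uvw.y; // invert v for the lower-left origin
    return uvw.xzy;       // swap v and w (Triangles mode only)
}
```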
These are mapped to Metal's post-tessellation vertex functions. The
semantic difference is much less here, so this change should be simpler
than the previous one. There are still some hairy parts, though.
In MSL, the array of control point data is represented by a special
type, `patch_control_point<T>`, where `T` is a valid stage-input type.
This object must be embedded inside the patch-level stage input. For
this reason, I've added a new type to the type system to represent this.
On Mac, the number of input control points to the function must be
specified in the `patch()` attribute. This is optional on iOS.
SPIRV-Cross takes this from the `OutputVertices` execution mode; the
intent is that if it's not set in the shader itself, MoltenVK will set
it from the tessellation control shader. If you're translating these
offline, you'll have to update the control point count manually, since
this number must match the number that is passed to the
`drawPatches:...` family of methods.
Fixes #120.
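For reference, a hedged sketch of what the post-tessellation vertex function's
stage input might look like (attribute indices, names, and the control point
count are illustrative):
```
#include <metal_stdlib>
using namespace metal;

struct ControlPoint
{
    float4 color [[attribute(1)]];
};

struct PatchIn
{
    float4 patch_color [[attribute(0)]];     // per-patch varying
    patch_control_point<ControlPoint> cp;    // per-control-point data
};

// On macOS the control point count (3 here) is mandatory in [[patch(...)]]
// and must match the count passed to the drawPatches:... methods.
[[patch(triangle, 3)]]
vertex float4 main0(PatchIn patchIn [[stage_in]],
                    float3 uvw [[position_in_patch]])
{
    return patchIn.patch_color +
           patchIn.cp[0].color * uvw.x +
           patchIn.cp[1].color * uvw.y +
           patchIn.cp[2].color * uvw.z;
}
```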
This should fix a whole host of issues related to structs in the `Input`
class in a tessellation control shader.
Also, use pointer arithmetic instead of dereferencing the `ops` array.
This is critical in case we wind up stepping beyond the bounds of the
array.
There's no need to do so, since these are not stage-out structs being
returned, but regular structures being written to a buffer. This also
neatly avoids issues writing to composite (e.g. arrayed) per-patch
outputs from a tessellation control shader.
These are transpiled to kernel functions that write the output of the
shader to three buffers: one for per-vertex varyings, one for per-patch
varyings, and one for the tessellation levels. This structure is
mandated by the way Metal works, where the tessellation factors are
supplied to the draw method in their own buffer, while the per-patch and
per-vertex varyings are supplied as though they were vertex attributes;
since they have different step rates, they must be in separate buffers.
The kernel is expected to be run in a workgroup whose size is the
greater of the number of input or output control points. It uses Metal's
support for vertex-style stage input to a compute shader to get the
input values; therefore, at least one instance must run per input point.
Meanwhile, Vulkan mandates that it run at least once per output point.
Overrunning the output array is a concern, but any values written should
either be discarded or overwritten by subsequent patches. I'm probably
going to put some slop space in the buffer when I integrate this into
MoltenVK to be on the safe side.
This is necessary to deal with indirect draws, where the draw parameters
are given in a buffer instead of passed by the CPU. For normal draws,
the draw parameters are set with Metal's `setVertexBytes:` method.
This undoes the change to add the vertex count to the aux buffer,
rendering that entire discussion largely moot. Oh well. It was a
discussion that needed to happen anyway.
`bitcast_glsl_op()` is sometimes called for `Boolean` types, e.g. for
specialization constants. We don't want the assert to trip if this is
going to be a no-op anyway.
Storage was in place already, so mostly just dealing with bitcasts and
constants.
Simplifies some of the bitcasting logic, and this exposed some bugs in the
implementation. Refactor to use correct-width integers with explicit bitcast opcodes.
Structs are aligned as you would expect in MSL (maximum member
alignment); there is no minimum 16-byte alignment like in std140.
Also rename the dummy "pad" members to a reserved naming scheme.
Apparently we didn't use those yet. MSL seems to be able to alias struct
types and variable types to a degree, so that's why it has escaped
testing until now.
In the past, SPIRV-Cross threw an error in this case because it couldn't
work out which swizzle from the auxiliary buffer needs to be passed.
Now, we pass the swizzle around with the texture object, like a combined
image-sampler and its associated sampler.
If not enough components are provided in the shader,
the MSL compiler throws an error rather than leaving the components
undefined. This hurts portability, so we need to add explicit padding
here.
MSL does not support value semantics for arrays (sigh), so we need to
force constant references and deal with copies if we have a different
address space than what we end up guessing.
A flat array was consuming way too much memory and was far too slow to
initialize properly with a very large ID bound (8 million IDs, showed up as #1 hotspot in perf).
The Meta struct does not have to be ordered, as we never iterate over it
in a meaningful way, so using a hashmap here is reasonable. Very few IDs
should need decorations or metadata, so this should also be quite a
decent memory saving.
For the pathological case, a 6x uplift was observed.
This is a fairly fundamental change on how IDs are handled.
It serves many purposes:
- Improve performance. We only need to iterate over IDs which are
relevant at any one time.
- Makes sure we iterate through IDs in SPIR-V module declaration order
rather than ID space. IDs don't have to be monotonically increasing,
which was an assumption SPIRV-Cross used to have. It has apparently
never been a problem until now.
- Support LUTs of structs. We do this by interleaving declaration of
constants and struct types in SPIR-V module order.
To support this, the ParsedIR interface needed to change slightly.
Before setting any ID with variant_set<T> we let ParsedIR know
that an ID with a specific type has been added. The surface for change
should be minimal.
ParsedIR will maintain a per-type list of IDs which the cross-compiler
will need to consider for later.
Instead of looping over ir.ids[] (which can be extremely large), we loop
over types now, using:
    ir.for_each_typed_id<SPIRVariable>([&](uint32_t id, SPIRVariable &var) {
        handle_variable(var);
    });
Now we make sure that we're never looking at irrelevant types.
This allows shaders to declare and use pointer-type variables. Pointers
may be loaded and stored, be the result of an `OpSelect`, be passed to
and returned from functions, and even be passed as inputs to the `OpPhi`
instruction. All types of pointers may be used as variable pointers.
Variable pointers to storage buffers and workgroup memory may even be
loaded from and stored to, as though they were ordinary variables. In
addition, this enables using an interior pointer to an array as though
it were an array pointer itself using the `OpPtrAccessChain`
instruction.
This is a rather large and involved change, mostly because this is
somewhat complicated with a lot of moving parts. It's a wonder
SPIRV-Cross's output is largely unchanged. Indeed, many of these changes
are to accomplish exactly that! Perhaps the largest source of changes
was the violation of the assumption that, when emitting types, the
pointer type didn't matter.
One of the test cases added by the change doesn't optimize very well;
the output of `spirv-opt` here is invalid SPIR-V. I need to file a bug
with SPIRV-Tools about this.
I wanted to test that variable pointers to images worked too, but I
couldn't figure out how to propagate the access qualifier properly--in
MSL, it's part of the type, so getting this right is important. I've
punted on that for now.
This is required to avoid relying on complex sub-expression elimination
in compilers, and generates cleaner code.
The problem case is if a complex expression is used in an access chain,
like:
    Composite comp = buffer[texture(...)];
    vec4 a = comp.a + comp.b + comp.c;
Before, we did not have common subexpression tracking for
OpLoad/OpAccessChain, so we easily ended up with code like:
    vec4 a = buffer[texture(...)].a + buffer[texture(...)].b + buffer[texture(...)].c;
A good compiler will optimize this, but we should not rely on it, and
forcing texture(...) to a temporary also looks better.
The solution is to add a vector "implied_expression_reads", which works
similarly to expression_dependencies. We also need an extra mechanism in
to_expression which lets us skip expression read checking and do it
later. E.g. for expr -> access chain -> load, we should only trigger
a read of expr when using the loaded expression.
Based on a patch by Stefan Dösinger.
Metal cannot do signedness conversion on vertex attributes, and for good
reason. Putting a `uint4` into an `int4`, or a `char4` into a `uint4`,
would lose those values that are outside the range of the target type.
But putting a `uchar4` into a `short4` or an `int4`, or a `ushort4` into
an `int4`, should work. In that case, force the signedness in the shader
to match the declared type of the host.
Unfortunately, I don't really know how to automatically test this. This
remapping is done based on input parameters normally supplied by
MoltenVK. I'm not sure how we'd set this up for the command-line
`spirv-cross` tool.
Don't use `addsat()`/`subsat()`; that'll erroneously flag cases where
the sum is exactly the maximum integer value, or the difference is
exactly 0. Also, correct the condition for the `select()` function; it's
basically `mix()` with a boolean factor.
(What was I *thinking*?)
In GLSL, 8-bit types require GL_EXT_shader_8bit_storage. 16-bit types
can use either GL_AMD_gpu_shader_int16/GL_AMD_gpu_shader_half_float or
GL_EXT_shader_16bit_storage.
When trying to validate buffer sizes, we usually need to bail out when
using SpecConstantOps, but for some very specific cases where we allow
unsized arrays currently, we can safely allow "unknown" sized arrays as
well.
This is probably the best we can do; when we hit even more difficult
cases than this, we throw a more sensible error message.
This is a large refactor which splits out the SPIR-V parser from
Compiler and moves it into its more appropriately named Parser module.
The Parser is responsible for building a ParsedIR structure which is
then consumed by one or more compilers.
Compiler can take a ParsedIR by value or move reference. This should
allow for the optimal case in both multiple-compilation and
single-compilation scenarios.
Even as of Metal 2.1, MSL still doesn't support arrays of buffers
directly. Therefore, we must manually expand them. In the prologue, we
define arrays holding the argument pointers; these arrays are what the
transpiled code ends up referencing. We might be able to do similar
things for textures and samplers prior to MSL 2.0.
Speaking of which, also enable texture arrays on iOS MSL 1.2.
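A sketch of the expansion (the struct, names, and bindings are illustrative):
```
#include <metal_stdlib>
using namespace metal;

struct SSBO
{
    float4 v;
};

// Hypothetical expansion of an array of three buffers:
kernel void main0(device SSBO& ssbos_0 [[buffer(0)]],
                  device SSBO& ssbos_1 [[buffer(1)]],
                  device SSBO& ssbos_2 [[buffer(2)]],
                  uint gid [[thread_position_in_grid]])
{
    // Prologue: gather the separate arguments into an array of pointers,
    // which is what the transpiled code ends up referencing.
    device SSBO* ssbos[] = { &ssbos_0, &ssbos_1, &ssbos_2 };
    ssbos[gid % 3u]->v = float4(1.0f);
}
```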
It'll be useful to have an "auxiliary buffer" for other builtins--e.g.
`DrawIndex` (which should be easier to implement now), or `ViewIndex`
when someone gets around to implementing multiview.
Pass this buffer to leaf functions as well.
Test that we handle this for integer textures as well.
It's intended to be used with MoltenVK to support arbitrary
`VkComponentMapping` settings. The idea is that MoltenVK will pass a
buffer (which it sets to some buffer index that isn't being used)
containing packed versions of the `VkComponentMapping` struct, one for
each sampled image.
Yes, this is horribly ugly. It is unfortunately necessary. Much of the
ugliness is to support swizzling gather operations, where we need to
alter the component that the gather operates on--something complicated
by the `gather()` method requiring the passed-in component to be a
constant expression. It doesn't even support swizzling gathers on depth
textures, though I could add that if it turns out we need it.
This requires MSL 2.0+.
Also, force `ViewportIndex` and `Layer` to be defined as the correct
type, which is always `uint` in MSL.
Since Metal doesn't yet have geometry shaders, the vertex shader (or
tessellation evaluation shader == "post-tessellation vertex shader" in
Metal jargon) is the only kind of shader that can set this output. This
currently requires an extension to Vulkan, which causes validation of
the SPIR-V binaries for the test cases to fail. Therefore, the test
cases are marked "invalid", even though they're actually perfectly valid
SPIR-V--they just won't work without the
`SPV_EXT_shader_viewport_index_layer` extension.
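A hedged sketch of the resulting vertex output block (names are illustrative;
`[[viewport_array_index]]` needs MSL 2.0):
```
#include <metal_stdlib>
using namespace metal;

struct VertexOut
{
    float4 gl_Position [[position]];
    // Layer and ViewportIndex are always uint in MSL.
    uint gl_Layer [[render_target_array_index]];
    uint gl_ViewportIndex [[viewport_array_index]];
};

vertex VertexOut main0(uint vid [[vertex_id]])
{
    VertexOut out;
    out.gl_Position = float4(0.0f, 0.0f, 0.0f, 1.0f);
    out.gl_Layer = vid % 6u;
    out.gl_ViewportIndex = vid % 4u;
    return out;
}
```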
This is somewhat tricky, because in MSL this value is obtained through a
function, `get_sample_position()`. Since the call expression is an
rvalue, it can't be passed by reference, so functions get a copy
instead.
This was the last piece preventing us from turning on sample-rate
shading support in MoltenVK.
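A minimal sketch of how this might come out (names are illustrative):
```
#include <metal_stdlib>
using namespace metal;

fragment float4 main0(uint gl_SampleID [[sample_id]])
{
    // gl_SamplePosition has no attribute in MSL; it is fetched via a function.
    // The call is an rvalue, so called functions receive a copy, not a reference.
    float2 gl_SamplePosition = get_sample_position(gl_SampleID);
    return float4(gl_SamplePosition, 0.0f, 1.0f);
}
```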
Implement this by flattening outputs and unflattening inputs explicitly.
This allows us to pass down a single struct instead of dealing with the
insanity that would be passing down each flattened member separately.
Remove stage_uniforms_var_id.
Seems to be dead code. Naked uniforms do not exist in SPIR-V for Vulkan,
which this seems to have been intended for. It was also unused elsewhere.
We were passing a constant '1' to `emit_atomic_func_op()`--which caused
us to refer to SPIR-V value `%1`, which is almost certainly not what we
want! What we really want is to add/subtract the literal constant '1'
to/from the memory location.
This only affects the builtin when it is used, and not when it's passed
to a function. It's a lot cleaner than the way I was doing it before.
Remove the `to_expression()` hack.
In SPIR-V, builtin integral vectors can be either signed or unsigned,
but in MSL they're always unsigned. Unfortunately, the MSL spec forbids
implicit conversions between vector types--even if the corresponding
scalar types would implicitly convert. If you try, the result is a
cryptic error message such as:
```
program_source:37:60: error: cannot convert between vector values of different size ('int4' (aka 'vector_int4') and 'vector_uint4' (vector of 4 'unsigned int' values))
float4 r3 = as_type<float4>((as_type<int4>(r0) * gl_LocalInvocationID.xyyy) + as_type<int4>(r2));
~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~
```
Therefore, uses of these builtins must be explicitly cast, since the
rest of the binary likely assumes that the builtin is of its declared
type.
Two varyings (vertex outputs/fragment inputs) might have the same
location but be in different components--e.g. the compiler may have
packed what were two different varyings into a single varying vector.
Giving both varyings the same `[[user]]` attribute won't work--it may
yield unexpected results, or flat out fail to link. We could eventually
pack such varyings into a single vector, but that would require us to
handle the case where the varyings are different types--e.g. a `float`
and a `uint` packed into the same vector. For now, it seems most
prudent to give them unique `[[user]]` locations and let Apple's
compiler work out the best way to pack them.
This roughly matches their semantics in SPIR-V and MSL. For `FMin`,
`FMax`, and `FClamp`, and the Metal functions `fast::min()`,
`fast::max()`, and `fast::clamp()`, the result is undefined if any
operand is NaN. For the 'N' operations and their corresponding MSL
`precise::` functions, the result is consistent with IEEE 754 (first
non-NaN wins; result is NaN if all operands are NaN).
We can only do this with 32-bit floats, though, because Metal only
provides these variants for `float`. `half` only has one variant of
these functions that is presumably consistent with IEEE 754. I guess
that's OK; the SPIR-V spec only says that `F{Min,Max,Clamp}` are
undefined for NaNs. Performance might suffer, though.
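Illustrative mapping for 32-bit floats (the helper names are made up for the
example):
```
#include <metal_stdlib>
using namespace metal;

// FMin: result is undefined if an operand is NaN.
static inline float spv_fmin(float x, float y) { return fast::min(x, y); }
// NMin: consistent with IEEE 754, first non-NaN operand wins.
static inline float spv_nmin(float x, float y) { return precise::min(x, y); }
```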
The SPIR-V spec says that these check if the operands either are
unordered or satisfy the given condition. So that's just what we'll do,
using Metal's `isunordered()` stdlib function. Apple's optimizers ought
to be able to collapse that to a single unordered compare.
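For example, FUnordLessThan could come out as something like this (the helper
name is made up):
```
#include <metal_stdlib>
using namespace metal;

// True if the operands are unordered (either is NaN) or x < y.
static inline bool spv_funord_less_than(float x, float y)
{
    return isunordered(x, y) || x < y;
}
```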
When the name of an alias global variable collides with a global
declaration, MSL would emit inconsistent names, sometimes with the
naming fix, sometimes without, because names were being tracked in two
separate meta blocks. Fix this by always redirecting parameter naming to
the original base variable as necessary.
MSL would force thread const& which would not work if the input argument
came from a different storage class.
Emit proper non-reference arguments for such values.
Add CompilerMSL::Options::disable_rasterization input/output API flag.
Disable rasterization via API flag or when writing to textures.
Disable rasterization when shader declares no output.
Add test shaders for vertex no output and write texture forcing void output.
Some projects build SPIRV-Cross as a single translation unit
and this causes a lot of warnings because the same macro is redeclared
multiple times in the different backends. This makes sure that each
backend has its own namespace for internal macros.
Add CompilerMSL::Options::texture_width_max.
Emit and use spvTexelBufferCoord() function to convert 1D
texel buffer coordinates to 2D Metal texture coordinates.
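A sketch of the emitted helper, assuming texture_width_max is 4096 (the exact
form of the generated helper may differ):
```
#include <metal_stdlib>
using namespace metal;

// Map a 1D texel buffer coordinate onto a 2D texture of width 4096.
static inline uint2 spvTexelBufferCoord(uint tc)
{
    return uint2(tc % 4096u, tc / 4096u);
}
```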
Support flattening StorageOutput & StorageInput matrices and arrays.
No longer move matrix & array inputs to separate buffer.
Add separate SPIRFunction::fixup_statements_in & SPIRFunction::fixup_statements_out
instead of just SPIRFunction::fixup_statements.
Emit SPIRFunction::fixup_statements at beginning of functions.
CompilerMSL track vars_needing_early_declaration.
Pass global output variables as variables to functions that access them.
Sort input structs by location, same as output structs.
Emit struct declarations in order output, input, uniforms.
Regenerate reference shaders to new formats defined by above.