There is some fallout where internal functions now use stronger types.
It would be overkill to move everything over to strong types right now,
but perhaps we can migrate slowly over time.
Vulkan has two types of buffer descriptors,
`VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC` and
`VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC`, which allow the client to
offset the buffers by an amount given when the descriptor set is bound
to a pipeline. Metal provides no direct support for this when the buffer
in question is in an argument buffer, so once again we're on our own.
These offsets cannot be stored or associated in any way with the
argument buffer itself, because they are set at bind time. Different
pipelines may have different offsets set. Therefore, we must use a
separate buffer, not in any argument buffer, to hold these offsets. Then
the shader must manually offset the buffer pointer.
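As a minimal sketch, the emitted MSL looks roughly like this; the names
(spvDynamicOffsets, spvDescriptorSet0), the buffer indices, and the set
layout are illustrative, not the exact ones SPIRV-Cross emits:

    #include <metal_stdlib>
    using namespace metal;

    struct Data { float4 v; };

    // The argument buffer itself; the dynamic offset cannot live here,
    // because it is only known at bind time.
    struct spvDescriptorSet0
    {
        device Data* data [[id(0)]];
    };

    kernel void main0(device spvDescriptorSet0& set0 [[buffer(0)]],
                      constant uint* spvDynamicOffsets [[buffer(23)]],
                      uint gid [[thread_position_in_grid]])
    {
        // Manually rebase the pointer by the offset bound with the set.
        device Data& data = *(device Data*)((device char*)set0.data + spvDynamicOffsets[0]);
        data.v = float4(float(gid));
    }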
This change fully supports arrays, including arrays of arrays, even
though Vulkan forbids them. It does not, however, support runtime
arrays. Perhaps later.
Writable textures cannot use argument buffers on iOS; they must be
passed as arguments directly to the shader function. Since we won't know
if a given storage image will have the `NonWritable` decoration at the
time we encode the argument buffer, we must pass all storage images as
discrete arguments. Previously, we threw an error if we encountered an
argument buffer with a writable texture in it on iOS.
This change introduces functions and, in one case, a class, to support
the `VK_KHR_sampler_ycbcr_conversion` extension. Except in the case of
GBGR8 and BGRG8 formats, for which Metal natively supports implicit
chroma reconstruction, we're on our own here. We have to do everything
ourselves. Much of the complexity comes from the need to support
multiple planes, which must now be passed to functions that use the
corresponding combined image-samplers. The rest is from the actual
Y'CbCr conversion itself, which requires additional post-processing of
the sample retrieved from the image.
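As a rough sketch of that post-processing step, here is what the
conversion amounts to for one common case (BT.601, full range, with Y'
assumed in the G channel and Cb/Cr in B/R per the usual Vulkan layout;
the helper name is made up, and the real code must also handle other
encodings, ranges, and chroma reconstruction):

    #include <metal_stdlib>
    using namespace metal;

    static inline float4 spvConvertYCbCrBT601FullRange(float4 ycbcr)
    {
        float y  = ycbcr.g;
        float cb = ycbcr.b - 0.5;
        float cr = ycbcr.r - 0.5;
        return float4(y + 1.402 * cr,
                      y - 0.344136 * cb - 0.714136 * cr,
                      y + 1.772 * cb,
                      ycbcr.a);
    }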
Passing sampled images to a function was a particular problem. To
support this, I've added a new class which is emitted to MSL shaders
that pass around sampled images with Y'CbCr conversions attached. It
can handle sampled images with or without Y'CbCr conversion. This is an
awful abomination that should not exist, but I'm worried that there's
some shader out there which does this. This support requires Metal 2.0
to work properly, because it uses default-constructed texture objects,
which were only added in MSL 2. I'm not even going to get into arrays of
combined image-samplers--that's a whole other can of worms. They are
deliberately unsupported in this change.
I've taken the liberty of refactoring the support for texture swizzling
while I'm at it. It's now treated as a post-processing step similar to
Y'CbCr conversion. I'd like to think this is cleaner than having
everything in `to_function_name()`/`to_function_args()`. It still looks
really hairy, though. I did, however, get rid of the explicit type
arguments to `spvGatherSwizzle()`/`spvGatherCompareSwizzle()`.
Update the C API. In addition to supporting this new functionality, add
some compiler options introduced in previous changes, for which I
neglected to update the C API.
These methods have largely the same logic, with minor differences. The
fact that I felt compelled to duplicate the logic into another method
was one of the things that bothered me about the variable pointers
change. This cleans that part of the code up; now we don't have two
places to change.
This command allows the caller to set the base value of
`BuiltInWorkgroupId`, and thus of `BuiltInGlobalInvocationId`. Metal
provides no direct support for this... but it does provide a builtin,
`[[grid_origin]]`, normally used to pass the base values for the stage
input region, which we will now abuse to pass the dispatch base and
avoid burning a buffer binding.
`[[grid_origin]]`, as part of Metal's support for compute stage input,
requires MSL 1.2. For 1.0 and 1.1, we're forced to provide a buffer.
(Curiously, this builtin was undocumented until the MSL 2.2 release. Go
figure.)
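For MSL 1.2 and up, the generated kernel's interface is roughly shaped
like this (names and the output binding are illustrative):

    #include <metal_stdlib>
    using namespace metal;

    kernel void main0(uint3 spvDispatchBase [[grid_origin]],
                      uint3 wg_id [[threadgroup_position_in_grid]],
                      device uint3* out [[buffer(0)]])
    {
        // BuiltInWorkgroupId is rebased by the dispatch base; no extra
        // buffer binding is burned for it.
        out[0] = wg_id + spvDispatchBase;
    }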
Make sure to test everything with scalar as well to catch any weird edge
cases.
Not all opcodes are covered here, just the arithmetic ones. FP64 packing
is also ignored.
The only piece added by this extension is the `DeviceIndex` builtin,
which tells the shader which device in a grouped logical device it is
running on.
Metal's pipeline state objects are owned by the `MTLDevice` that created
them. Since Metal doesn't support logical grouping of devices the way
Vulkan does, we'll thus have to create a pipeline state for each device
in a grouped logical device. The upcoming peer group support in Metal 3
will not change this. For this reason, for Metal, the device index is
supplied as a constant at pipeline compile time.
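One plausible shape for this, as a sketch only (I'm assuming a Metal
function constant here; the name and constant index are made up):

    #include <metal_stdlib>
    using namespace metal;

    // Set to a different value when compiling the pipeline state for
    // each device in the group.
    constant uint spvDeviceIndex [[function_constant(0)]];

    fragment float4 main0()
    {
        return float4(float(spvDeviceIndex));
    }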
There's an interaction between `VK_KHR_device_group` and
`VK_KHR_multiview` in the
`VK_PIPELINE_CREATE_VIEW_INDEX_FROM_DEVICE_INDEX_BIT`, which defines the
view index to be the same as the device index. The new
`view_index_from_device_index` MSL option supports this functionality.
This maps these decorations to their MSL equivalents. I've mapped
`Coherent` to `volatile`, since MSL doesn't have anything weaker than
`volatile` but stronger than nothing.
As part of this, I had to remove the implicit `volatile` added for
atomic operation casts. If the buffer is already `coherent` or
`volatile`, then we would add a second `volatile`, which would be
redundant. I think this is OK even when the buffer *doesn't* have
`coherent`: `T *` is implicitly convertible to `volatile T *`, but not
vice-versa. It seems to compile OK at any rate. (Note that the
non-`volatile` overloads of the atomic functions documented in the spec
aren't present in the MSL 2.2 stdlib headers.)
`restrict` is tricky, because in MSL, as in C++, it needs to go *after*
the asterisk or ampersand for the pointer type it's modifying.
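For example:

    #include <metal_stdlib>
    using namespace metal;

    // 'restrict' must follow the * or &:
    kernel void main0(device float* restrict dst [[buffer(0)]],
                      const device float* restrict src [[buffer(1)]])
    {
        dst[0] = src[0];
    }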
Another issue is that, in the `Simple`, `GLSL450`, and `Vulkan` memory
models, `Restrict` is the default (i.e. does not need to be specified);
but MSL likely follows the `OpenCL` model where `Aliased` is the
default. We probably need to implicitly set either `Restrict` or
`Aliased` depending on the module's declared memory model.
The old method of using a different unpacked matrix type doesn't work
for scalar alignment. It certainly wouldn't have any effect for a square
matrix, since the number of columns equals the number of rows. So now
we'll store such matrices as arrays of packed vectors.
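A sketch of the new representation (names illustrative): a
scalar-aligned float3x3 member becomes an array of packed vectors, and a
helper reassembles the matrix on load:

    #include <metal_stdlib>
    using namespace metal;

    struct UBO
    {
        packed_float3 model[3]; // was float3x3; now scalar-aligned
        float scale;
    };

    static inline float3x3 spvLoadMatrix(constant packed_float3 (&m)[3])
    {
        return float3x3(float3(m[0]), float3(m[1]), float3(m[2]));
    }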
Relaxed block layout relaxes the restrictions on vector alignment,
allowing vectors to be aligned on scalar boundaries. Scalar block layout
relaxes this further, allowing *any* member to be aligned on a scalar
boundary. The requirement that a vector not improperly straddle a
16-byte boundary is also relaxed.
I've also added a test showing that `std430` layout works with UBOs.
I'm troubled by the dual meaning of the `Packed` extended decoration. In
some instances (struct, `float[]`, and `vec2[]` members), it actually
means the exact opposite: that the member needs extra padding. This is
especially problematic for `vec2[]`, because now we need to distinguish
the two cases by checking the array stride. I wonder if this should
actually be split into two decorations.
This is needed to support `VK_KHR_multiview`, which is in turn needed
for Vulkan 1.1 support. Unfortunately, Metal provides no native support
for this, and Apple is once again less than forthcoming, so we have to
implement it all ourselves.
Tessellation and geometry shaders are deliberately unsupported for now.
The problem is that the current implementation encodes the `ViewIndex`
as part of the `InstanceIndex`, which in the SPIR-V environment at least
only exists in the vertex shader. So we need to work out a way to pass
the view index along to the later stages.
This implementation runs vertex shaders for all views up to the highest
bit set in the view mask, even those whose bits are clear. The fragments
for the inactive views are then discarded. Avoiding this is difficult:
calculating the view indices becomes far more complicated if we can only
run for those views which are set in the mask.
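A sketch of the decomposition (the names and the binding are
illustrative): the instance count is multiplied by the number of views
on the API side, and the shader splits the composite index back apart:

    #include <metal_stdlib>
    using namespace metal;

    vertex float4 main0(uint inst [[instance_id]],
                        constant uint& spvViewCount [[buffer(24)]])
    {
        uint gl_ViewIndex     = inst % spvViewCount;
        uint gl_InstanceIndex = inst / spvViewCount;
        return float4(float(gl_ViewIndex), float(gl_InstanceIndex), 0.0, 1.0);
    }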
We used to use the Binding decoration for this, but this method is
hopelessly broken. If no explicit MSL resource remapping exists, we
remap automatically in a manner which should always "just work".
The older API was oriented around IDs, which are not available unless
you're doing full reflection; that is awkward for use cases which know
their sets/bindings up front.
Optimize resource bindings to use a hashmap rather than doing linear
scans all the time.
In multiple-entry-point modules, we declared builtin inputs which were
not supposed to be used for that entry point.
Fix this, by being more strict when checking which builtins to emit.
This gets rather complicated because MSL does not support
`OpArrayLength` natively. We need to pass down a buffer which contains
buffer sizes, and we compute the array length on demand.
Support both discrete descriptors as well as argument buffers.
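For the discrete-descriptor case, the emitted code is roughly shaped
like this (names, binding, and the stride of 4 are illustrative):

    #include <metal_stdlib>
    using namespace metal;

    kernel void main0(device uint* ssbo [[buffer(0)]],
                      constant uint* spvBufferSizeConstants [[buffer(25)]])
    {
        // OpArrayLength: (size in bytes - offset of unsized member) / stride
        uint len = (spvBufferSizeConstants[0] - 0u) / 4u;
        if (len != 0u)
            ssbo[0] = len;
    }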
Change aux buffer to swizzle buffer.
There is no good reason to expand the aux buffer, so name it
appropriately.
Make the code cleaner by emitting a straight pointer to uint rather than
a dummy struct which only contains a single unsized array member anyway.
This will also end up being very similar to how we implement swizzle
buffers for argument buffers.
Do not use implied binding if it overflows int32_t.
Some support for subgroups is present starting in Metal 2.0 on both iOS
and macOS. macOS gains more complete support in 10.14 (Metal 2.1).
Some restrictions are present. On iOS, and on macOS 10.13, the
implementation of `OpGroupNonUniformElect` is incorrect: if thread 0 has
already terminated or is inactive in a conditional branch, the first
thread that *is* executing will falsely believe itself not to be
elected. Unfortunately, this operation is part of the "basic" feature
set; without it, subgroups cannot be supported at all.
The `SubgroupSize` and `SubgroupLocalInvocationId` builtins are only
available in compute shaders (and, by extension, tessellation control
shaders), despite SPIR-V making them available in all stages. This
limits the usefulness of some of the subgroup operations in fragment
shaders.
Although Metal on macOS supports some clustered, inclusive, and
exclusive operations, it does not support them all. In particular,
inclusive and exclusive min, max, and, or, and xor, as well as cluster
sizes other than 4, are not supported. If this becomes a problem, they
could be emulated, but at a significant performance cost due to the need
for non-uniform operations.
Atomics are not supported on images or texture buffers in MSL.
Properly throw an error if `OpImageTexelPointer` is used (since it can
only be used for atomic operations anyway).
- Replace ostringstream with custom implementation.
~30% performance uplift on vector-shuffle-oom test.
Allocations are measurably reduced in Valgrind.
- Replace std::vector with SmallVector.
A classic malloc optimization: small vectors are backed by inline data
(see the sketch after this list).
~7-8% gain on vector-shuffle-oom on GCC 8 on Linux.
- Use an object pool for IVariant type.
We generally allocate a lot of SPIR* objects. We can amortize these
allocations neatly by pooling them.
- ~15% overall uplift on ./test_shaders.py --iterations 10000 shaders/.
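A minimal sketch of the SmallVector idea (not the actual
implementation): the first N elements live in inline storage, so the
common small case never touches the heap:

    #include <cstddef>
    #include <vector>

    template <typename T, size_t N = 8>
    class SmallVector
    {
    public:
        void push_back(const T &value)
        {
            if (size_ < N)
                inline_storage_[size_] = value; // no heap allocation
            else
                spill_.push_back(value);        // rare: spill to heap
            size_++;
        }

        T &operator[](size_t i)
        {
            return i < N ? inline_storage_[i] : spill_[i - N];
        }

        size_t size() const { return size_; }

    private:
        T inline_storage_[N] = {};
        std::vector<T> spill_;
        size_t size_ = 0;
    };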
We cannot deduce early whether `OpLoad` needs ArrayCopy templates, since
it's heavily context-dependent, and we might only know on the third
iteration of the compile loop.
This is a pragmatic trick to avoid symbol collision where a project
links against SPIRV-Cross statically, while linking to other projects
which also use SPIRV-Cross statically. We can end up with very awkward
symbol collisions which can resolve themselves silently because
SPIRV-Cross is pulled in as necessary. To fix this, we must use
different symbols and embed two copies of SPIRV-Cross in this scenario,
now with different namespaces, which in turn leads to different symbols.
This adds a new C API for SPIRV-Cross which is intended to be stable,
both API and ABI wise.
The C++ API has been refactored a bit to make the C wrapper easier and
cleaner to write. In particular, the vertex attribute / resource
interfaces for MSL have been rewritten to avoid taking mutable pointers
into the interface. That would have been very annoying to wrap, and it
didn't fit well with the rest of the C++ API to begin with. While doing
this, I went ahead and removed all the old deprecated interfaces.
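Typical usage of the C API looks like this (error handling elided):

    #include <stddef.h>
    #include "spirv_cross_c.h"

    void compile_to_msl(const SpvId *words, size_t word_count)
    {
        spvc_context context = NULL;
        spvc_parsed_ir ir = NULL;
        spvc_compiler compiler = NULL;
        const char *source = NULL;

        spvc_context_create(&context);
        spvc_context_parse_spirv(context, words, word_count, &ir);
        spvc_context_create_compiler(context, SPVC_BACKEND_MSL, ir,
                                     SPVC_CAPTURE_MODE_TAKE_OWNERSHIP,
                                     &compiler);
        spvc_compiler_compile(compiler, &source);
        /* 'source' is owned by the context; copy it before destroying. */
        spvc_context_destroy(context);
    }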
The CMake build system has also seen an overhaul.
It is now possible to build static/shared/CLI separately with -D
options.
The shared library only exposes the C API, as it is the only ABI-stable
API. pkg-config files as well as CMake modules are exported and
installed for the shared library configuration.
The tessellation levels in Metal are stored as a densely-packed array of
half-precision floating-point values. But stage-in attributes in Metal
must have offsets and strides aligned to a multiple of four, so we can't
add them individually. Luckily for us, the arrays have at most four
elements. So, let's use vectors for them!
Triangles get a single attribute with a `float4`, where the outer levels
are in `.xyz` and the inner levels are in `.w`. The arrays are unpacked
as though we had added the elements individually. Quads get two: a
`float4` with the outer levels and a `float2` with the inner levels.
Further, since vectors can be indexed as arrays, there's no need to
unpack them in this case.
This also saves on precious vertex attributes. Before, we were using up
to six of them. Now we need at most two.
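A sketch of the resulting patch stage-in blocks in the evaluation shader
(names and attribute numbers illustrative):

    #include <metal_stdlib>
    using namespace metal;

    struct TrianglePatchIn
    {
        float4 gl_TessLevel [[attribute(0)]];      // outer in .xyz, inner in .w
    };

    struct QuadPatchIn
    {
        float4 gl_TessLevelOuter [[attribute(0)]]; // four outer levels
        float2 gl_TessLevelInner [[attribute(1)]]; // two inner levels
    };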
Builtin attributes in SPIR-V aren't linked by location, but by their
built-in-ness. This poses a problem for MSL, since builtin inputs in
the vertex pipeline are just regular attributes. We must then assign
them locations so that they can be matched up to the attributes in the
stage input descriptor--and also to avoid duplicate attribute numbers in
tessellation evaluation shaders, where there are two different
stage-in structs, so the member index therein is no longer unique!
In SPIR-V, there are always two inner levels and four outer levels, even
if the input patch isn't a quad patch. But in MSL, due to requirements
imposed by Metal, only one inner level and three outer levels exist when
the input patch is a triangle patch. We must explicitly ignore any write
to the nonexistent second inner and fourth outer levels in this case.
This is intended to be used to support `VK_KHR_maintenance2`'s
tessellation domain origin feature. If `tess_domain_origin_lower_left`
is `true`, the `v` coordinate will be inverted with respect to the
domain. Additionally, in `Triangles` mode, the `v` and `w` coordinates
will be swapped. This is because the winding order is interpreted
differently in lower-left mode.
These are mapped to Metal's post-tessellation vertex functions. The
semantic difference is much less here, so this change should be simpler
than the previous one. There are still some hairy parts, though.
In MSL, the array of control point data is represented by a special
type, `patch_control_point<T>`, where `T` is a valid stage-input type.
This object must be embedded inside the patch-level stage input. For
this reason, I've added a new type to the type system to represent this.
On macOS, the number of input control points to the function must be
specified in the `patch()` attribute. This is optional on iOS.
SPIRV-Cross takes this from the `OutputVertices` execution mode; the
intent is that if it's not set in the shader itself, MoltenVK will set
it from the tessellation control shader. If you're translating these
offline, you'll have to update the control point count manually, since
this number must match the number that is passed to the
`drawPatches:...` family of methods.
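A sketch of what a post-tessellation vertex function looks like with
this in place (the names and the control point count of 3 are
illustrative):

    #include <metal_stdlib>
    using namespace metal;

    struct ControlPoint
    {
        float4 position [[attribute(0)]];
    };

    struct PatchIn
    {
        // Per-control-point data embedded in the patch-level stage input.
        patch_control_point<ControlPoint> cp;
    };

    [[patch(triangle, 3)]]
    vertex float4 main0(PatchIn in [[stage_in]],
                        float3 uvw [[position_in_patch]])
    {
        return in.cp[0].position * uvw.x +
               in.cp[1].position * uvw.y +
               in.cp[2].position * uvw.z;
    }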
Fixes #120.
These are transpiled to kernel functions that write the output of the
shader to three buffers: one for per-vertex varyings, one for per-patch
varyings, and one for the tessellation levels. This structure is
mandated by the way Metal works, where the tessellation factors are
supplied to the draw method in their own buffer, while the per-patch and
per-vertex varyings are supplied as though they were vertex attributes;
since they have different step rates, they must be in separate buffers.
The kernel is expected to be run in a workgroup whose size is the
greater of the number of input or output control points. It uses Metal's
support for vertex-style stage input to a compute shader to get the
input values; therefore, at least one instance must run per input point.
Meanwhile, Vulkan mandates that it run at least once per output point.
Overrunning the output array is a concern, but any values written should
either be discarded or overwritten by subsequent patches. I'm probably
going to put some slop space in the buffer when I integrate this into
MoltenVK to be on the safe side.
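The generated kernel's interface is shaped roughly like this (names,
bindings, and the triangle patch size are illustrative):

    #include <metal_stdlib>
    using namespace metal;

    struct VertexOut { float4 pos; };
    struct PatchOut  { float4 color; };

    kernel void main0(uint gid [[thread_position_in_grid]],
                      device VertexOut* spvOut [[buffer(28)]],
                      device PatchOut* spvPatchOut [[buffer(27)]],
                      device MTLTriangleTessellationFactorsHalf* spvTessLevel [[buffer(26)]])
    {
        // One element per output control point.
        spvOut[gid].pos = float4(0.0);
        if (gid % 3u == 0u)
        {
            // One element per patch.
            uint patch_id = gid / 3u;
            spvPatchOut[patch_id].color = float4(1.0);
            spvTessLevel[patch_id].edgeTessellationFactor[0] = half(1.0);
            spvTessLevel[patch_id].edgeTessellationFactor[1] = half(1.0);
            spvTessLevel[patch_id].edgeTessellationFactor[2] = half(1.0);
            spvTessLevel[patch_id].insideTessellationFactor = half(1.0);
        }
    }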
This is necessary to deal with indirect draws, where the draw parameters
are given in a buffer instead of passed by the CPU. For normal draws,
the draw parameters are set with Metal's `setVertexBytes:` method.
This undoes the change to add the vertex count to the aux buffer,
rendering that entire discussion largely moot. Oh well. It was a
discussion that needed to happen anyway.
In the past, SPIRV-Cross threw an error in this case because it couldn't
work out which swizzle from the auxiliary buffer needs to be passed.
Now, we pass the swizzle around with the texture object, like a combined
image-sampler and its associated sampler.
If not enough components are provided in the shader, the MSL compiler
throws an error rather than making the missing components undefined.
This hurts portability, so we need to add explicit padding here.
This is a fairly fundamental change to how IDs are handled.
It serves many purposes:
- Improve performance. We only need to iterate over IDs which are
relevant at any one time.
- Makes sure we iterate through IDs in SPIR-V module declaration order
rather than ID space. IDs don't have to be monotonically increasing,
which was an assumption SPIRV-Cross used to have. It has apparently
never been a problem until now.
- Support LUTs of structs. We do this by interleaving declaration of
constants and struct types in SPIR-V module order.
To support this, the ParsedIR interface needed to change slightly.
Before setting any ID with variant_set<T> we let ParsedIR know
that an ID with a specific type has been added. The surface for change
should be minimal.
ParsedIR will maintain a per-type list of IDs which the cross-compiler
will need to consider for later.
Instead of looping over ir.ids[] (which can be extremely large), we loop
over types now, using:
    ir.for_each_typed_id<SPIRVariable>([&](uint32_t id, SPIRVariable &var) {
        handle_variable(var);
    });
Now we make sure that we're never looking at irrelevant types.
This allows shaders to declare and use pointer-type variables. Pointers
may be loaded and stored, be the result of an `OpSelect`, be passed to
and returned from functions, and even be passed as inputs to the `OpPhi`
instruction. All types of pointers may be used as variable pointers.
Variable pointers to storage buffers and workgroup memory may even be
loaded from and stored to, as though they were ordinary variables. In
addition, this enables using an interior pointer to an array as though
it were an array pointer itself using the `OpPtrAccessChain`
instruction.
This is a rather large and involved change, mostly because this is
somewhat complicated with a lot of moving parts. It's a wonder
SPIRV-Cross's output is largely unchanged. Indeed, many of these changes
are to accomplish exactly that! Perhaps the largest source of changes
was the violation of the assumption that, when emitting types, the
pointer type didn't matter.
One of the test cases added by the change doesn't optimize very well;
the output of `spirv-opt` here is invalid SPIR-V. I need to file a bug
with SPIRV-Tools about this.
I wanted to test that variable pointers to images worked too, but I
couldn't figure out how to propagate the access qualifier properly--in
MSL, it's part of the type, so getting this right is important. I've
punted on that for now.
Based on a patch by Stefan Dösinger.
Metal cannot do signedness conversion on vertex attributes, and for good
reason. Putting a `uint4` into an `int4`, or a `char4` into a `uint4`,
would lose those values that are outside the range of the target type.
But putting a `uchar4` into a `short4` or an `int4`, or a `ushort4` into
an `int4`, should work. In that case, force the signedness in the shader
to match the declared type of the host.
Unfortunately, I don't really know how to automatically test this. This
remapping is done based on input parameters normally supplied by
MoltenVK. I'm not sure how we'd set this up for the command-line
`spirv-cross` tool.
When trying to validate buffer sizes, we usually need to bail out when
using SpecConstantOps, but for some very specific cases where we
currently allow unsized arrays, we can safely allow "unknown"-sized
arrays as well.
This is probably the best we can do; when we hit even more difficult
cases than this, we throw a more sensible error message.
This is a large refactor which splits out the SPIR-V parser from
Compiler and moves it into its more appropriately named Parser module.
The Parser is responsible for building a ParsedIR structure which is
then consumed by one or more compilers.
Compiler can take a ParsedIR by value or by move reference. This should
allow for the optimal case in both multiple-compilation and
single-compilation scenarios.
Even as of Metal 2.1, MSL still doesn't support arrays of buffers
directly. Therefore, we must manually expand them. In the prologue, we
define arrays holding the argument pointers; these arrays are what the
transpiled code ends up referencing. We might be able to do similar
things for textures and samplers prior to MSL 2.0.
Speaking of which, also enable texture arrays on iOS MSL 1.2.
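A sketch of the expansion (names illustrative): each element arrives as
a separate argument, and the prologue rebuilds the array that the
transpiled code indexes:

    #include <metal_stdlib>
    using namespace metal;

    struct UBO { float4 v; };

    kernel void main0(constant UBO& ubo_0 [[buffer(0)]],
                      constant UBO& ubo_1 [[buffer(1)]],
                      device float4* out [[buffer(2)]],
                      uint gid [[thread_position_in_grid]])
    {
        // Prologue: array holding the argument pointers.
        constant UBO* ubo[] = { &ubo_0, &ubo_1 };
        out[gid] = ubo[gid % 2u]->v;
    }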
It'll be useful to have an "auxiliary buffer" for other builtins--e.g.
`DrawIndex` (which should be easier to implement now), or `ViewIndex`
when someone gets around to implementing multiview.
Pass this buffer to leaf functions as well.
Test that we handle this for integer textures as well.
It's intended to be used with MoltenVK to support arbitrary
`VkComponentMapping` settings. The idea is that MoltenVK will pass a
buffer (which it set to some buffer index that isn't being used)
containing packed versions of the `VkComponentMapping` struct, one for
each sampled image.
Yes, this is horribly ugly. It is unfortunately necessary. Much of the
ugliness is to support swizzling gather operations, where we need to
alter the component that the gather operates on--something complicated
by the `gather()` method requiring the passed-in component to be a
constant expression. It doesn't even support swizzling gathers on depth
textures, though I could add that if it turns out we need it.
This is somewhat tricky, because in MSL this value is obtained through a
function, `get_sample_position()`. Since the call expression is an
rvalue, it can't be passed by reference, so functions get a copy
instead.
This was the last piece preventing us from turning on sample-rate
shading support in MoltenVK.
Implement this by flattening outputs and unflattening inputs explicitly.
This allows us to pass down a single struct instead of dealing with the
insanity that would be passing down each flattened member separately.
Remove stage_uniforms_var_id.
It seems to be dead code. Naked uniforms do not exist in SPIR-V for
Vulkan, which this seems to have been intended for, and it was unused
elsewhere.
We were passing a constant '1' to `emit_atomic_func_op()`--which caused
us to refer to SPIR-V value `%1`, which is almost certainly not what we
want! What we really want is to add/subtract the literal constant '1'
to/from the memory location.
Two varyings (vertex outputs/fragment inputs) might have the same
location but be in different components--e.g. the compiler may have
packed what were two different varyings into a single varying vector.
Giving both varyings the same `[[user]]` attribute won't work--it may
yield unexpected results, or flat out fail to link. We could eventually
pack such varyings into a single vector, but that would require us to
handle the case where the varyings are different types--e.g. a `float`
and a `uint` packed into the same vector. For now, it seems most
prudent to give them unique `[[user]]` locations and let Apple's
compiler work out the best way to pack them.
The SPIR-V spec says that these check if the operands either are
unordered or satisfy the given condition. So that's just what we'll do,
using Metal's `isunordered()` stdlib function. Apple's optimizers ought
to be able to collapse that to a single unordered compare.
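For example, `OpFUnordLessThan` becomes something like:

    #include <metal_stdlib>
    using namespace metal;

    // Unordered check OR the ordered comparison; the optimizer can fold
    // this into a single unordered compare.
    static inline bool funord_less_than(float a, float b)
    {
        return isunordered(a, b) || a < b;
    }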
Add CompilerMSL::Options::disable_rasterization input/output API flag.
Disable rasterization via API flag or when writing to textures.
Disable rasterization when shader declares no output.
Add test shaders for a vertex shader with no output, and for a texture
write forcing void output.
Add CompilerMSL::Options::texture_width_max.
Emit and use spvTexelBufferCoord() function to convert 1D
texel buffer coordinates to 2D Metal texture coordinates.
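The emitted helper is essentially this (with 4096 standing in for the
texture_width_max option):

    #include <metal_stdlib>
    using namespace metal;

    static inline uint2 spvTexelBufferCoord(uint tc)
    {
        return uint2(tc % 4096u, tc / 4096u);
    }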
Support flattening StorageOutput & StorageInput matrices and arrays.
No longer move matrix & array inputs to separate buffer.
Add separate SPIRFunction::fixup_statements_in & SPIRFunction::fixup_statements_out
instead of just SPIRFunction::fixup_statements.
Emit SPIRFunction::fixup_statements at beginning of functions.
CompilerMSL track vars_needing_early_declaration.
Pass global output variables as variables to functions that access them.
Sort input structs by location, same as output structs.
Emit struct declarations in order output, input, uniforms.
Regenerate reference shaders to new formats defined by above.
Replace with common/hlsl/msl options instead. The old interface had some
bad interaction with overloading, which meant you had to up-cast to the
base class to be able to use set_options; that was awkward.
Support MSL typedefs to declare 3-row row-major matrices as 3-column matrices.
Allow those matrices to be decorated as packed.
Support transposing those matrices when used.
Modify how member alignments are calculated.
Support Workgroup (threadgroup) variables.
Mark if SPIRConstant is used as an array length, since it cannot be specialized.
Resolve specialized array length constants.
Support passing an array to MSL function.
Support emitting GLSL array assignments in MSL via an array copy function (see the sketch after this list).
Support for memory and control barriers.
Struct packing enhancements, including packing nested structs.
Enhancements to replacing illegal MSL variable and function names.
Add Compiler::get_entry_point_name_map() function to retrieve entry point renamings.
Remove CompilerGLSL::clean_func_name() as obsolete.
Fixes to types in bitcast MSL functions.
Add Variant::get_id() member function.
Add CompilerMSL::Options::msl_version option.
Add numerous MSL compute tests.
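A sketch of the array copy helper mentioned above (the exact signature
varies by address space):

    #include <metal_stdlib>
    using namespace metal;

    template <typename T, uint N>
    static inline void spvArrayCopy(thread T (&dst)[N], thread const T (&src)[N])
    {
        for (uint i = 0; i < N; i++)
            dst[i] = src[i];
    }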
Emit input struct assignment by assigning member by member from stage_in struct.
Map qualified member name from pointer type, not base type.
Add Compiler::expression_type_id() function, similar to expression_type().
Support BuiltInFragDepth.
Emit interface block for StorageClassUniformConstant.
Throw exception when output or fragment input structs contain matrix or array.
Dynamically created interface structs sorted by location number instead of alphabetically.
Add Compiler::is_array() function.
CompilerMSL add emit_custom_functions() function.
CompilerMSL restrict use of as_type<> cast to necessary conditions.
CompilerMSL refactor get_declared_struct_member_size() and
get_declared_struct_member_alignment() functions, and remove
unnecessary get_declared_type_size() functions.
Add test shaders-msl/vulkan/frag/spec-constant.vk.frag.
CompilerGLSL type_to_glsl() and image_type_glsl() functions support optional object ID.
Add SPIRType::Image::access member to support SPIR-V OpTypeImage access qualifier.
Remove SPIRType::Image::is_read and ::is_written members.
Use DecorationNonReadable and DecorationNonWritable to mark read/write access for image variables.
CompilerMSL emit access qualifiers per image variable, instead of per image type.
CompilerGLSL and CompilerHLSL behaviour is unchanged.
Add bool members is_read and is_written to SPIRType::Image.
Output correct texture read/write access by marking whether textures
are read from and written to by the shader.
Override bitcast_glsl_op() to use Metal as_type<type> functions.
Add implementations of SPIR-V functions inverse(), degrees() & radians().
Map inverseSqrt() to rsqrt().
Map roundEven() to rint().
GLSL functions imageSize() and textureSize() map to equivalent
expressions using the MSL get_width() & get_height() functions.
Map several SPIR-V integer bitfield functions to MSL equivalents.
Map SPIR-V atomic functions to MSL equivalents.
Map texture packing and unpacking functions to MSL equivalents.
Refactor existing, and add new, image query functions.
Reorganize header lines into includes and pragmas.
Simplify type_to_glsl() logic.
Add MSL test case vert/functions.vert for added function implementations.
Add MSL test case comp/atomic.comp for added function implementations.
test_shaders.py use macOS compilation for MSL shader compilation validations.
WebGL supports LOD texture functions only in fragment shaders, but
SPIR-V supports only LOD texture functions in vertex shaders. This
reverts calls which were forced to use an LOD (inferred from a 0
constant) back to plain calls in vertex shaders when using legacy ES.
Remove unnecessary use of std:: prefix in spirv_msl.cpp.
Use typedef instead of #define.
spirv-cross deprecate --metal CLI option and replace with --msl option.
CompilerMSL accesses options using same design pattern as CompilerGLSL and CompilerHLSL.
CompilerMSL support setting vertex attribute & resource binding specs via either constructor or compile() method overload.
CompilerMSL support single UBO packing and padding in single pass.
spirv_cross app (main.cpp) supports turning off UBO packing and padding via command line option.
Add MSL UBO alignment test shader.
spirv_msl optionally add padding and packing to allow MSL
struct members to align with SPIR-V struct alignments.
spirv_cross add convenience methods for testing Decorations.
spirv_glsl replace member_decl() function with new emit_struct_member().
Allow struct member types to be marked as packed via DecorationCPacked decoration.
Do not emit function prototypes. If secondary functions are used,
suppress compiler -Wmissing-prototypes warnings.
Refactor Compiler::emit_function_prototype() functions to simplify.
Rename CompilerMSL::CustomFunctionHandler to OpCodePreprocessor, and
CompilerMSL::register_custom_functions() to preprocess_op_codes(),
to perform more generic preprocessing.
Add space between dynamic header lines and fixed header lines.