Emit synthetic functions before function constants.
Support the use of spvQuantizeToF16() in function constants, for numerical
behavior consistent with the OpQuantizeToF16 opcode.
Ensure subnormal results from OpQuantizeToF16 are flushed to zero per SPIR-V spec.
Adjust SPIRV-Cross unit test reference shaders to accommodate these changes.
Any MSL reference shader that includes a synthetic function is affected,
since the location at which it is emitted has changed.
Add spvQuantizeToF16() family of synthetic functions to convert
from float to half and back again, and add function attribute
[[clang::optnone]] to honor infinities during conversions.
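A minimal sketch of what such a helper might look like (illustrative only; the real
helper also flushes subnormal results to zero, which is omitted here):

```
// Round-trip through half to quantize; optnone keeps the compiler from
// folding the conversion away, so infinities survive the round trip.
[[clang::optnone]]
float spvQuantizeToF16(float val)
{
    return float(half(val));
}
```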
Adjust SPIRV-Cross unit test reference shaders to accommodate these changes.
Add [[clang::optnone]] attribute to spvF*() functions used for handling
floating point operations decorated with DecorationNoContraction.
Just using precise::fma() did not work.
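The wrappers end up along these lines (rough sketch, not the exact emitted helpers):

```
// Performing the multiply inside a function marked optnone keeps the Metal
// compiler from contracting it into an fma with surrounding operations.
template<typename T>
[[clang::optnone]] T spvFMul(T l, T r)
{
    return l * r;
}
```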
Adjust SPIRV-Cross unit test reference shaders to accommodate these changes.
Matching output/input struct member types between shader stages could fail if
a location is shared between members, each using different components of that
location, because the member vecsize was only stored once for the location.
Add MSLShaderInput::component member.
Use LocationComponentPair to key inputs_by_location, instead of just location.
Pass the component value, as well as the location, to ensure_correct_input_type().
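A sketch of the data involved (field set abridged; LocationComponentPair is shown as a
plain pair, which is an assumption about its exact definition):

```
#include <cstdint>
#include <map>
#include <utility>

// Keying inputs by (location, component) lets two members that share a
// location but start at different components keep separate type/vecsize info.
struct MSLShaderInput
{
    uint32_t location = 0;
    uint32_t component = 0; // new: first component used within the location
    uint32_t vecsize = 0;
    // ... format, builtin, etc.
};

using LocationComponentPair = std::pair<uint32_t, uint32_t>; // (location, component)
std::map<LocationComponentPair, MSLShaderInput> inputs_by_location;
```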
Based on CTS testing, math optimizations between MSL and Vulkan are inconsistent.
In some cases, enabling MSL's fast-math compilation option matches Vulkan's math
results. In other cases, disabling it does. Broadly enabling or disabling fast-math
across all shaders results in some CTS test failures either way.
To fix this, selectively enable/disable fast-math optimizations in the MSL code,
using metal::fast and metal::precise function namespaces, where supported, and
the [[clang::optnone]] function attribute otherwise.
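For illustration, per-call selection looks roughly like this where the namespaces exist
(function and variable names here are illustrative):

```
// metal::precise keeps IEEE-like behavior for this call even when the
// translation unit is compiled with fast-math; metal::fast opts in per call.
float strict_val = metal::precise::sqrt(x);
float fast_val   = metal::fast::sqrt(y);
```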
Adjust SPIRV-Cross unit test reference shaders to accommodate these changes.
Add test shader for new functionality.
Add a legacy test reference shader for an unrelated buffer-bitcast
test that doesn't seem to have been added previously.
When gl_Position is defined by SPIR-V but neither used nor initialized,
it appears twice in the MSL output, as gl_Position and gl_Position_1.
The existing tests for whether an output is active check only that it is
used by an op, or initialized. Adding the implicit gl_Position also marked
the existing gl_Position as active, duplicating the output variable.
The fix: when checking whether an implicit gl_Position output needs to be
added, also check whether the variable is already defined in the shader
and just needs to be marked as active.
Add test shader.
Previous casting in constant expressions used as_type<> between
types of different overall sizes.
Add a check on the overall size (width * vecsize) to ensure as_type<> will work;
otherwise use a regular cast. Also beef up the test of integer values to check
vecsize as well, and use regular casts for those.
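For example, the size check amounts to (illustrative values):

```
// Same overall size (4 x 32-bit): bit-reinterpretation with as_type<> is legal.
int4 i4 = as_type<int4>(f4);
// Different overall size (32-bit -> 16-bit): as_type<> would not compile,
// so a regular value cast is emitted instead.
short s = short(i);
```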
Vulkan specifies that the Sample Mask Test occurs before fragment shading.
This means gl_SampleMaskIn should be influenced by both sample-shading and
VkPipelineMultisampleStateCreateInfo::pSampleMask.
CTS tests dEQP-VK.pipeline.multisample_shader_builtin.* bear this out.
For sample-shading, gl_SampleMaskIn should have only a single bit set.
Since Metal does not filter for this, apply a bitmask based on gl_SampleID.
For a fixed sample mask, since Metal is unaware of
VkPipelineMultisampleStateCreateInfo::pSampleMask, we need to ensure that
we apply it to both gl_SampleMaskIn and gl_SampleMask. This has the side
effect of a redundant application of pSampleMask if the shader already
includes gl_SampleMaskIn when setting gl_SampleMask, but I don't see an
easy way around this.
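Conceptually, the input mask is adjusted along these lines (sketch;
`spvFixedSampleMask` is a hypothetical name for the fixed-mask constant):

```
// Sample-shading: keep only the bit for the current sample.
gl_SampleMaskIn[0] &= (1 << gl_SampleID);
// Fold in pSampleMask, which Metal itself never sees.
gl_SampleMaskIn[0] &= spvFixedSampleMask;
```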
Also, simplify the logic for including the fixed sample mask in gl_SampleMask,
and print the fixed sample mask as a hex value for readability of the bits.
When setting SPIRVCrossDecorationInterpolantComponentExpr decoration, take
into consideration a possible array of vectors in an access chain, and don't
apply the decoration until we traverse through the array into the vector itself.
We'll need to force a temporary and mark it as precise.
MSL is a little weird here, but we can piggyback on top of the invariant
float math option to force fma() operations everywhere.
* Fix '--msl-multi-patch-workgroup' cases where thread count exceeds data bounds
* Fix gl_PrimitiveID off-by-one error when computing the last valid index
* Point gl_out to the last patch's data when threads exceed input data bounds
* Point patchOut to the last patch's data when threads exceed input data bounds
* Update MSL test expectations.
* Undo change to MSL multi-patch hull output bound checks
* Update MSL multi-patch test expectations.
Firstly, never flatten inputs or outputs in multi-patch mode.
The main scenario where we do need to care is Block IO.
In this case, we should only flatten the top-level member, and after
that we use access chains as normal.
Using structs in Input storage class is now possible as well. We don't
need to consider per-location fixups at all here. In Vulkan, IO structs
must match exactly. Only plain vectors can have smaller vector sizes as
a special case.
For buffers, support all MSLResourceBinding::basetype pointers, not just void*.
Rename MSLResourceBinding::base_type to basetype, for naming consistency with other structs.
Add lookup from argument buffer argument index to resource binding for efficiency.
Fix error in advancing padding counts with combined image samplers.
Run clang-format.
If CompilerMSL::Options::pad_argument_buffer_resources enabled, Metal argument buffer
struct members are positionally aligned to their argument indexes by adding synthetic
padding members when needed. The types and sizes of these synthetic members are
identified in the resource_bindings vector provided through the API.
Add CompilerMSL::Options::pad_argument_buffer_resources to enable padding
Metal argument buffer structs to positionally match members to argument indexes.
Add MSLResourceBinding::base_type to identify resource type through API.
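For illustration, a padded argument buffer struct might look roughly like this
(member and padding names are hypothetical):

```
struct spvDescriptorSetBuffer0
{
    texture2d<float> uTex [[id(0)]];
    sampler uTexSmplr [[id(1)]];
    // Synthetic padding: resource_bindings declares a texture at argument
    // index 2 that this shader never uses, so a placeholder keeps uUBO at 3.
    texture2d<float> spvPaddingTex2 [[id(2)]];
    constant UBO* uUBO [[id(3)]];
};
```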
We only considered invalid names, and overwrote the alias for the
function. The correct fix is to replace illegal names early, do the
reserved fixup, then copy the alias back to the entry point name.
According to the Metal Shading Language Specification, it's not
supported for vertex functions in any Metal version, only fragment and
kernel functions.
This is necessary to avoid invalid output because of how implicit
dependencies on builtins work.
For example, the fixup for `BuiltInSubgroupEqMask` initializes the
variable based on `builtin_subgroup_invocation_id_id`, a field storing
the ID for a variable with decoration `BuiltInSubgroupLocalInvocationId`.
This could be either a variable that already exists in the input
(spirv_msl.cpp:300) or, if necessary, a newly created one
(spirv_msl.cpp:621). In both cases, though,
`builtin_subgroup_invocation_id_id` is only set under the condition
`need_subgroup_mask || needs_subgroup_invocation_id`.
`need_subgroup_mask` is true if any of the `BuiltInSubgroupXXMask` are
set in `active_input_builtins`.
Normally, if the program contains `BuiltInSubgroupEqMask`,
`Compiler::ActiveBuiltinHandler` will set it in `active_input_builtins`.
But this only happens if the variable is actually used, whereas
`fix_up_shader_inputs_outputs` loops over all variables in the program
regardless of whether they're used.
If `BuiltInSubgroupEqMask` is not used,
`builtin_subgroup_invocation_id_id` is never set, but before this patch
the fixup hook would try to use it anyway, producing MSL that references
a nonexistent variable named `_0`.
Avoid this by changing `fix_up_shader_inputs_outputs` to skip builtins
which are not set in `active_input_builtins` or
`active_output_builtins`. And add a test case.
In Metal, the `[[position]]` input to a fragment shader remains at
fragment center, even at sample rate, like OpenGL and Direct3D. In
Vulkan, however, when the fragment shader runs at sample rate, the
`FragCoord` builtin moves to the sample position in the framebuffer,
instead of the fragment center. To account for this difference, adjust
the `FragCoord`, if present, by the sample position. The -0.5 offset is
because the fragment center is at (0.5, 0.5).
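The emitted fixup amounts to something like this (sketch, assuming MSL's
`get_sample_position()` helper):

```
// Move FragCoord from the fragment center to the current sample's position;
// get_sample_position() returns coordinates in [0, 1), hence the -0.5.
gl_FragCoord.xy += get_sample_position(gl_SampleID) - 0.5;
```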
Also, add an option to force sample-rate shading in a fragment shader.
Since Metal has no explicit control for this, this is done by adding a
dummy `[[sample_id]]` which is otherwise unused, if none is already
present. This is intended to be used from e.g. MoltenVK when a
pipeline's `minSampleShading` value is nonzero.
Instead of checking if any `Input` variables have `Sample`
interpolation, I've elected to check that the `SampleRateShading`
capability is present. Since `SampleId`, `SamplePosition`, and the
`Sample` interpolation decoration require this cap, this should be
equivalent for any valid SPIR-V module. If this isn't acceptable, let me
know.
We have been interchanging spv and SPIRV_Cross_ for a while, which
causes weirdness since we don't explicitly ban SPIRV_Cross identifiers,
as these identifiers are generally used for interface variable
workarounds.
Add support for declaring a fixed subgroup size. Metal, like Vulkan with
`VK_EXT_subgroup_size_control`, allows the thread execution width to
vary depending on factors such as register usage. Unfortunately, this
breaks several tests that depend on the subgroup size being what the
device says it is. So we'll fix the subgroup size at the size the device
declares. The extra invocations in the subgroup will appear to be
inactive. Because of this, the ballot mask builtins are now ANDed with
the active subgroup mask.
Add support for emulating a subgroup of size 1. This is intended to be
used by Vulkan Portability implementations (e.g. MoltenVK) when the
hardware/software combo provides insufficient support for subgroups.
Luckily for us, Vulkan 1.1 only requires that the subgroup size be at
least 1.
Add support for quadgroup and SIMD-group functions which were added to
iOS in Metal 2.2 and 2.3. This will allow clients to take advantage of
expanded quadgroup and SIMD-group support in recent Metal versions and
on recent Apple GPUs (families 6 and 7).
Gut emulation of subgroup builtins in fragment shaders. It turns out
codegen for the SIMD-group functions in fragment wasn't implemented for
AMD on Mojave; it's a safe bet that it wasn't implemented for the other
drivers either. Subgroup support in fragment shaders now requires Metal
2.2.
`min_lod_clamp()` was actually added in MSL 2.2 on iOS 13. The
restriction was based on the beta versions which didn't have it. Since
the beta versions didn't support family 6, this leads me to suspect that
the reason they lacked `min_lod_clamp()` is that it requires family 6.
This does not seem to be documented anywhere.
`simd_is_helper_thread()` was added in MSL 2.3 to iOS. I neglected to
update this when I finished up `SPV_EXT_demote_to_helper_invocation`.
`barycentric_coord` and `primitive_id` were added in MSL 2.3 on iOS 14.
They are only supported on family 7.
New in MSL 2.3 is a template that can be used in the place of a scalar
type in a stage-in struct. This template has methods which interpolate
the varying at the given points. Curiously, you can't set interpolation
attributes on such a varying; perspective-correctness is encoded in the
type, while interpolation must be done using one of the methods. This
makes using this somewhat awkward from SPIRV-Cross, requiring us to jump
through a bunch of hoops to make this all work.
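Roughly, the MSL 2.3 shape of this is (sketch; struct and member names are illustrative):

```
#include <metal_stdlib>
using namespace metal;

struct main0_in
{
    // Perspective-correctness is part of the type; no interpolation attribute
    // can be placed on the member itself.
    interpolant<float4, interpolation::perspective> vColor [[user(locn0)]];
};

fragment float4 main0(main0_in in [[stage_in]], uint sampleID [[sample_id]])
{
    // Interpolation is explicit, via a method on the interpolant member.
    return in.vColor.interpolate_at_sample(sampleID);
}
```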
Using varyings from functions in particular is a pain point, requiring
us to pass the stage-in struct itself around. An alternative is to pass
references to the interpolants; except this will fall over badly with
composite types, which naturally must be flattened. As with
tessellation, dynamic indexing isn't supported with pull-model
interpolation. This is because of the need to reference the original
struct member in order to call one of the pull-model interpolation
methods on it. Also, this is done at the variable level; this means that
if one varying in a struct is used with the pull-model functions, then
the entire struct is emitted as pull-model interpolants.
For some reason, this was not documented in the MSL spec, though there
is a property on `MTLDevice`, `supportsPullModelInterpolation`,
indicating support for this, which *is* documented. This does not appear
to be implemented yet for AMD: it returns `NO` from
`supportsPullModelInterpolation`, and pipelines with shaders using the
templates fail to compile. It *is* implemented for Intel. It's probably
also implemented for Apple GPUs: on Apple Silicon, OpenGL calls down to
Metal, and it wouldn't be possible to use the interpolation functions
without this implemented in Metal.
Based on my testing, where SPIR-V and GLSL have the offset relative to
the pixel center, in Metal it appears to be relative to the pixel's
upper-left corner, as in HLSL. Therefore, I've added an offset of 0.4375,
i.e. one half minus one sixteenth, to all arguments to
`interpolate_at_offset()`.
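So a GLSL `interpolateAtOffset(vColor, off)` ends up as roughly (illustrative):

```
// off is the GLSL offset relative to the pixel center; Metal wants it
// relative to the pixel's upper-left corner.
float4 v = in.vColor.interpolate_at_offset(off + 0.4375);
```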
This also fixes a long-standing bug: if a pull-model interpolation
function is used on a varying, make sure that varying is declared. We
were already doing this only for the AMD pull-model function,
`interpolateAtVertexAMD()`; for reasons which are completely beyond me,
we weren't doing this for the base interpolation functions. I also note
that there are no tests for the interpolation functions for GLSL or
HLSL.
I kept the code to replace constant zero arguments, because `Bias` and
`Grad` still have some problems on desktop GPUs.
`Bias` works on AMD GPUs. `Grad` does not. Both work on Intel. Still
needs testing on NV. It will definitely work with Apple GPUs.
Add support for GL_EXT_nonuniform_qualifier / SPV_EXT_descriptor_indexing.
MSLResourceBinding includes array size through API, and substitutes
in that size if the image or sampler array is not explicitly sized.
OpCopyObject supports SPIRCombinedImageSampler type in MSL.
Writing to buffers actually works starting in MSL 2.1 (macOS 10.14,
iOS 12). Writing to textures works starting in MSL 2.2 (macOS 10.15, iOS
13).
No tests, unfortunately, because the MSL 2.2 compiler and above produce a
warning that cannot be disabled, as it has no associated option.
Metal doesn't support broadcasting or shuffling boolean values, but we
can work around that by casting them to `ushort`, then casting back to
`bool`. I used `ushort` instead of `uint` because 16-bit values give
better throughput on Apple GPUs.
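As a sketch (the helper name is hypothetical):

```
// Round-trip through ushort, since simd_broadcast() has no bool overload.
bool spvBroadcastBool(bool value, ushort lane)
{
    return bool(simd_broadcast(ushort(value), lane));
}
```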
Only the least significant *n* bits are meaningful, where *n* is the subgroup size.
The Vulkan CTS actually checks this.
The `FindLSB` tests weren't actually failing, but I masked that anyway,
in case there's some corner case the CTS is missing.
`SubgroupEqMask` had a fencepost error that gave wrong values for
invocation ID 32.
For `SubgroupGeMask` and `SubgroupGtMask`, I forgot to shift the values
from `extract_bits()` up so that the mask is in the correct position.
Using `insert_bits()` instead should fold these two operations into one.
`SubgroupLtMask` and `SubgroupLeMask` were already correct.
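For instance, the low word of `SubgroupGeMask` can be built in one step
(sketch, covering subgroup sizes up to 32):

```
// Set bits [gl_SubgroupInvocationID, 31] directly at the right offset,
// instead of extract_bits() followed by a shift.
uint lo = insert_bits(0u, 0xFFFFFFFFu, gl_SubgroupInvocationID, 32u - gl_SubgroupInvocationID);
```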
`half` cannot be bitcasted to `float`, because the two types are not the
same size. Use an expanding cast instead.
We were already doing this for stores to the tessellation levels; why I
didn't also do this for loads is beyond me.
Fix reversed coordinates: `y` should be used to calculate the row
address. Align row address to the row stride.
I've made the row alignment a function constant; this makes it possible
to override it at pipeline compile time.
Honestly, I don't know how this worked at all for Epic. It definitely
didn't work in the CTS prior to this.
MSL 2.3 has everything needed to support this extension on all
platforms. The existing `discard_fragment()` function was given demote
semantics, similar to Direct3D, and the `simd_is_helper_thread()`
function was finally added to iOS.
I've left the old test alone. Should I remove it in favor of these?
In some cases, we need to get a literal value from a spec constant op.
Mostly relevant when emitting buffers, so implement a 32-bit integer
scalar subset of the evaluator. Can be extended as needed to support
evaluating any specialization constant operation.