Emit synthetic functions before function constants.
Support the use of spvQuantizeToF16() in function constants, so that numerical
behavior is consistent with the OpQuantizeToF16 opcode.
Ensure subnormal results from OpQuantizeToF16 are flushed to zero per SPIR-V spec.
Adjust SPIRV-Cross unit test reference shaders to accommodate these changes.
Any MSL reference shader that includes a synthetic function is affected,
since the location at which it is emitted has changed.
Add spvQuantizeToF16() family of synthetic functions to convert
from float to half and back again, and add function attribute
[[clang::optnone]] to honor infinities during conversions.
Adjust SPIRV-Cross unit test reference shaders to accommodate these changes.
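A minimal sketch of the scalar case, assuming the general shape of the helper (generated code also covers vector types, and the exact body may differ): round-trip through half, flush subnormal results to a signed zero per the OpQuantizeToF16 rules, and rely on [[clang::optnone]] so the conversion and its infinities survive fast-math.

    #include <metal_stdlib>
    using namespace metal;

    [[clang::optnone]] float spvQuantizeToF16(float fval)
    {
        half hval = half(fval);
        // Flush subnormal results to zero, preserving the sign;
        // normal values, infinities, and NaNs pass through unchanged.
        if (!(isnormal(hval) || isinf(hval) || isnan(hval)))
            hval = copysign(half(0), hval);
        return float(hval);
    }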
Add [[clang::optnone]] attribute to spvF*() functions used for handling
floating point operations decorated with DecorationNoContraction.
Just using precise::fma() did not work.
Adjust SPIRV-Cross unit test reference shaders to accommodate these changes.
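A hedged sketch of the shape of such helpers (spvFAdd/spvFMul follow the spvF*() naming above; the exact bodies in generated code may differ): expressing the operation through fma() and marking the function [[clang::optnone]] keeps the Metal compiler from contracting or re-associating it.

    #include <metal_stdlib>
    using namespace metal;

    template <typename T>
    [[clang::optnone]] T spvFAdd(T lhs, T rhs)
    {
        // Uncontracted add, kept intact by optnone.
        return fma(T(1), lhs, rhs);
    }

    template <typename T>
    [[clang::optnone]] T spvFMul(T lhs, T rhs)
    {
        // Uncontracted multiply, kept intact by optnone.
        return fma(lhs, rhs, T(0));
    }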
Based on CTS testing, math optimizations between MSL and Vulkan are inconsistent.
In some cases, enabling MSL's fast-math compilation option matches Vulkan's math
results. In other cases, disabling it does. Broadly enabling or disabling fast-math
across all shaders results in some CTS test failures either way.
To fix this, selectively enable or disable fast-math optimizations in the MSL code,
using the metal::fast and metal::precise function namespaces where supported, and
the [[clang::optnone]] function attribute otherwise.
Adjust SPIRV-Cross unit test reference shaders to accommodate these changes.
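An illustrative sketch of the idea (spvPreciseMul and the surrounding function are made-up names, not actual generated code): operations that must match Vulkan go through metal::precise, operations where fast-math is acceptable go through metal::fast, and anything without a namespaced variant is wrapped in an [[clang::optnone]] helper.

    #include <metal_stdlib>
    using namespace metal;

    // Fallback for operations that have no precise:: variant.
    [[clang::optnone]] float spvPreciseMul(float a, float b)
    {
        return a * b;
    }

    float4 shade(float t, float d)
    {
        float a = precise::sin(t);  // must match Vulkan's result
        float b = fast::exp(-d);    // fast-math acceptable here
        return float4(spvPreciseMul(a, b));
    }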
Add test shader for new functionality.
Add a legacy test reference shader for an unrelated buffer-bitcast
test that doesn't seem to have been added previously.
When gl_Position is defined by SPIR-V, but neither used nor initialized,
it appears twice in the MSL output, as gl_Position and glPosition_1.
The existing tests for whether an output is active check only that it is
used by an op, or initialized. Adding the implicit gl_Position also marked
the existing gl_Position as active, duplicating the output variable.
The fix: when checking for the need to add an implicit gl_Position
output, also check whether the variable is already defined in the shader
and only needs to be marked as active.
Add test shader.
Vulkan specifies that the Sample Mask Test occurs before fragment shading.
This means gl_SampleMaskIn should be influenced by both sample-shading and
VkPipelineMultisampleStateCreateInfo::pSampleMask.
CTS tests dEQP-VK.pipeline.multisample_shader_builtin.* bear this out.
For sample-shading, gl_SampleMaskIn should have only a single bit set.
Since Metal does not filter for this, apply a bitmask based on gl_SampleID.
For a fixed sample mask, since Metal is unaware of
VkPipelineMultisampleStateCreateInfo::pSampleMask, we need to ensure that
we apply it to both gl_SampleMaskIn and gl_SampleMask. This has the side
effect of a redundant application of pSampleMask if the shader already
includes gl_SampleMaskIn when setting gl_SampleMask, but I don't see an
easy way around this.
Also, simplify the logic for including the fixed sample mask in gl_SampleMask,
and print the fixed sample mask as a hex value so the bit pattern is easier to read.
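A hedged sketch of how the emitted fragment code might combine the masks (identifiers and the constant value are illustrative): the constant stands in for the fixed pSampleMask baked into the shader, the per-sample bit narrows gl_SampleMaskIn under sample-shading, and the fixed mask is also applied to the gl_SampleMask output.

    #include <metal_stdlib>
    using namespace metal;

    constant uint spvFixedSampleMask = 0x0000000fu;  // example pSampleMask value

    struct main0_out
    {
        float4 color [[color(0)]];
        uint gl_SampleMask [[sample_mask]];
    };

    fragment main0_out main0(uint gl_SampleID [[sample_id]],
                             uint gl_SampleMaskIn [[sample_mask]])
    {
        main0_out out = {};
        // Only the current sample's bit, further filtered by the fixed mask.
        uint maskedIn = gl_SampleMaskIn & (1u << gl_SampleID) & spvFixedSampleMask;
        out.color = float4(float(popcount(maskedIn)));
        // The fixed mask is applied to the output as well; redundant if the
        // shader already folded gl_SampleMaskIn into gl_SampleMask, but harmless.
        out.gl_SampleMask = maskedIn & spvFixedSampleMask;
        return out;
    }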
Emit block members directly in the IO structs and sort them.
This ensures a stable member order between stages.
To complete the story, we'll need to be able to inject unused inputs /
builtins, or eliminate unused outputs (probably easiest solution).
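For illustration, a sketch of what a flattened, sorted stage-output struct might look like (member names and locations are hypothetical, not actual generated output):

    #include <metal_stdlib>
    using namespace metal;

    struct main0_out
    {
        float4 VertexOut_color  [[user(locn0)]];
        float3 VertexOut_normal [[user(locn1)]];
        float4 gl_Position      [[position]];
    };

Because the block members are emitted directly and sorted, the matching stage-input struct in the next stage ends up with the same order.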
We'll need to force a temporary and mark it as precise.
MSL is a little weird here, but we can piggyback on the invariant
float math option to force fma() operations everywhere.
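A minimal sketch of that idea for a simple case (the function and its body are illustrative): the result is forced into a temporary built entirely from fma() operations, so the precise requirement is honored instead of letting the compiler contract or re-associate the expression.

    #include <metal_stdlib>
    using namespace metal;

    float4 transform(float4x4 m, float4 v)
    {
        // Precise temporary computed with explicit fma() chains.
        float4 r = fma(m[0], float4(v.x),
                   fma(m[1], float4(v.y),
                   fma(m[2], float4(v.z), m[3] * v.w)));
        return r;
    }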
* Fix '--msl-multi-patch-workgroup' cases where thread count exceeds data bounds
* Fix gl_PrimitiveID off-by-one error when computing the last valid index
* Point gl_out to the last patch's data when threads exceed input data bounds
* Point patchOut to the last patch's data when threads exceed input data bounds
  (see the sketch after this list)
* Update MSL test expectations.
* Undo change to MSL multi-patch hull output bound checks
* Update MSL multi-patch test expectations.
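A hedged sketch of the clamping (buffer layout, names, and the control-point count are illustrative): the patch index is clamped to the last valid patch, so excess threads read and write in-bounds data rather than running past the end.

    #include <metal_stdlib>
    using namespace metal;

    struct main0_patchOut { float4 tessLevel; };

    kernel void tesc_main(device main0_patchOut *spvPatchOut [[buffer(0)]],
                          constant uint &spvPatchCount [[buffer(1)]],
                          uint3 gl_GlobalInvocationID [[thread_position_in_grid]])
    {
        const uint kControlPointsPerPatch = 4u;
        // Last valid index is spvPatchCount - 1, not spvPatchCount (the off-by-one).
        uint gl_PrimitiveID = min(gl_GlobalInvocationID.x / kControlPointsPerPatch,
                                  spvPatchCount - 1u);
        device main0_patchOut &patchOut = spvPatchOut[gl_PrimitiveID];
        patchOut.tessLevel = float4(1.0f);
    }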