Similar concern to access chains. Objects that we cannot lower to
temporaries must implicitly access all of their expression dependencies when they
are themselves accessed.
Using vertex-style stage input is complex, and it doesn't support
nesting of structures or arrays. By using raw buffer input instead, we
get this support "for free," and everything becomes much simpler.
Arguably, this is the way I should've done this in the first place.
Eventually, I'd like to make this the default, and then remove the
option altogether. (And I still need to do that with
`multi_patch_workgroup`...)
Should help fix 66 tests in the Vulkan CTS, under the following trees:
- `dEQP-VK.pipeline.*.interface_matching.*`
- `dEQP-VK.tessellation.user_defined_io.*`
- `dEQP-VK.clipping.user_defined.*`
- Add CompilerMSL::emit_binary_ptr_op() and to_ptr_expression()
  to emit binary pointer ops. Compare matrix addresses without the automatic
  transpose() conversion, to avoid errors from taking the address of a temporary copy.
- Add Compiler::add_active_interface_variable() to also track active
interface vars in the entry point for SPIR-V 1.4 and above.
- For an OpPtrAccessChain that ends in an array element, use Element
  as an offset to the existing index; otherwise it would access into
  an array dimension that doesn't exist (see the sketch after this list).
- Dereference pointer function call arguments. Ultimately, this
  dereferencing is actually backwards, and in the future we should aim
  to properly support passing pointer variables between functions,
  but such a refactoring was beyond the scope of this change.
- Use [] to declare array of pointers, as array<T*> is not supported in MSL.
- Add unit test shaders.
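To make the Element handling concrete, here is a minimal C++ sketch of the idea; the index-chain representation and the helper name are hypothetical and much simpler than the real access-chain code:

```cpp
#include <string>
#include <vector>

// Hypothetical, simplified representation of an access chain built up as
// textual indices. OpPtrAccessChain's first index (Element) acts like pointer
// arithmetic on the base pointer itself, so when the base pointer already
// refers to an array element, Element must be folded into that existing index
// rather than appended as a new one, which would index a dimension that does
// not exist.
static void add_ptr_access_chain_element(std::vector<std::string> &indices,
                                         const std::string &element,
                                         bool base_points_at_array_element)
{
	if (base_points_at_array_element && !indices.empty())
		indices.back() = "(" + indices.back() + " + " + element + ")";
	else
		indices.push_back(element);
}
```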
GLSL precision and RelaxedPrecision are quite different in what they affect.
RelaxedPrecision decorates operations directly, while in GLSL the precision of
an operation is merely implied by its inputs.
This leads to situations where we have to promote mediump inputs to
highp, and the simplest approach is to force highp temporaries for
inputs which are consumed in a highp context. For completeness, we also
demote RelaxedPrecision inputs to mediump variables.
PHI is handled by copying the PHI into a temporary.
We have to be very careful with hoisted temporaries, since the child
temporary will not be analyzed up-front. We inherit the hoisted-ness
state and emit the hoisted child temporary as necessary. When faking the
temporaries with OpCopyObject, we make sure to block any variable
hoisting.
Hoisting children of PHI variables is fine, since PHIs are not hoisted with
the same framework as other temporaries.
Introduces the idea of a recompilation making forward progress.
There are some extreme edge cases where we need more than 3 compilation loops,
but we only allow this in specific circumstances where we can reason about
forward progress being made.
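For illustration, a minimal C++ sketch of such a recompilation loop; the three callbacks are hypothetical stand-ins for the real compiler hooks, not actual API:

```cpp
#include <cstdint>
#include <functional>
#include <stdexcept>
#include <string>

// Hypothetical hooks: one compilation pass, a "did anything request
// recompilation?" query, and a "did the last pass force something new?" query.
std::string compile_with_forward_progress(const std::function<std::string()> &compile_pass,
                                          const std::function<bool()> &needs_recompile,
                                          const std::function<bool()> &made_forward_progress)
{
	std::string result;
	uint32_t pass_count = 0;
	for (;;)
	{
		result = compile_pass();
		if (!needs_recompile())
			break;

		pass_count++;
		// Normally 3 passes is plenty. Allow more only when the last pass
		// provably made forward progress (e.g. it forced new temporaries);
		// otherwise assume we are stuck and bail out.
		if (pass_count >= 3 && !made_forward_progress())
			throw std::runtime_error("Over 3 compilation loops detected without forward progress.");
	}
	return result;
}
```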
block.cases_32bit is now added as requested, and we only parse the cases if the
remaining ops are a multiple of 2. Neither case list is mutable, because we
return a reference to one of them depending on the width of op.condition.
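As a rough sketch of the shape this takes (field and function names are simplified, not the exact declarations):

```cpp
#include <cstdint>
#include <vector>

struct SwitchCase
{
	uint64_t value; // Case literal, wide enough for 64-bit conditions.
	uint32_t block; // Target block ID.
};

struct Block
{
	std::vector<SwitchCase> cases_32bit;
	std::vector<SwitchCase> cases_64bit;
};

// Returns a const reference to one of the two lists depending on the width of
// the switch condition, which is why neither list can be handed out mutably.
const std::vector<SwitchCase> &get_case_list(const Block &block, uint32_t condition_width)
{
	return condition_width == 64 ? block.cases_64bit : block.cases_32bit;
}
```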
Signed-off-by: Sebastián Aedo <saedo@codeweavers.com>
SPIR-V allows an image to be marked as a depth image, but with a non-depth
format. Such images should be read or sampled as vectors instead of scalars,
except when they are subject to compare operations.
Don't mark an OpSampledImage as using a compare operation just because the
image contains a depth marker. Instead, require that a compare operation
is actually used on that image.
Compiler::image_is_comparison() was really testing whether an image is a
depth image, since it incorporates the depth marker. Rename that function
to is_depth_image(), to clarify what it is really testing.
In Compiler::is_depth_image(), do not treat an image as a depth image
if it has been explicitly marked with a color format, unless the image
is subject to compare operations.
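A minimal sketch of that rule, with the inputs boiled down to three flags; the names are illustrative rather than the real signatures:

```cpp
// Inputs reduced to flags for illustration.
static bool is_depth_image(bool has_depth_marker, bool has_explicit_color_format,
                           bool used_with_compare_op)
{
	// Compare operations always force depth semantics.
	if (used_with_compare_op)
		return true;

	// The depth marker alone is not enough if the image is explicitly declared
	// with a color format; such images are read/sampled as vectors.
	return has_depth_marker && !has_explicit_color_format;
}
```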
In CompilerMSL::to_function_name(), test for compare operations
specifically, rather than assuming them from the depth-image marker.
CompilerGLSL and CompilerMSL still contain a number of internal tests that
use is_depth_image() both for testing for a depth image, and for testing
whether compare operations are being used. I've left these as they are
for now, but these should be cleaned up at some point.
Add unit tests for fetch/sample depth images with color formats and no compare ops.
This is somewhat awkward to support, but the best we can do here
is to analyze the various Load/Store opcodes and deduce the ideal overall
alignment based on them. This is not a 100% perfect solution, but it should
be correct for any reasonable use case.
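Roughly, the deduction looks like the sketch below, assuming each Load/Store has already been reduced to the alignment it implies; the default upper bound is hypothetical:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Each entry is the alignment implied by one OpLoad/OpStore through a physical
// pointer (from its Aligned memory operand or the natural alignment of the
// accessed type). The usable alignment of the whole block is the smallest one
// seen, since every observed access must stay valid.
static uint32_t deduce_block_alignment(const std::vector<uint32_t> &access_alignments)
{
	uint32_t alignment = 16u; // Hypothetical upper bound when nothing is observed.
	for (uint32_t a : access_alignments)
		alignment = std::min(alignment, a);
	return alignment;
}
```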
Also fix various nitpicks with BDA support while I'm at it.
Moved the logic out of the parser as requested, since the parser is sensitive
and, as noted, should stay as simple as possible.
To that end, load_types is now tracked in the ParsedIR, which can be
accessed from the Compiler struct. The switch cases are fixed up in the CFG
stage, since that is the point where the nullptr was dereferenced.
Signed-off-by: Sebastián Aedo <saedo@codeweavers.com>
In some cases, we need to get a literal value from a spec constant op.
Mostly relevant when emitting buffers, so implement a 32-bit integer
scalar subset of the evaluator. Can be extended as needed to support
evaluating any specialization constant operation.
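A minimal sketch of what such a 32-bit scalar integer subset can look like; the interface is simplified, and the real evaluator walks OpSpecConstantOp operands recursively:

```cpp
#include <cstdint>
#include <stdexcept>

// Simplified interface; opcode names follow SPIR-V.
enum class SpecOp
{
	IAdd,
	ISub,
	IMul,
	ShiftLeftLogical,
	BitwiseAnd,
	BitwiseOr,
	BitwiseXor
};

static uint32_t evaluate_spec_op(SpecOp op, uint32_t a, uint32_t b)
{
	switch (op)
	{
	case SpecOp::IAdd:
		return a + b;
	case SpecOp::ISub:
		return a - b;
	case SpecOp::IMul:
		return a * b;
	case SpecOp::ShiftLeftLogical:
		return a << (b & 31u);
	case SpecOp::BitwiseAnd:
		return a & b;
	case SpecOp::BitwiseOr:
		return a | b;
	case SpecOp::BitwiseXor:
		return a ^ b;
	}
	throw std::runtime_error("Unsupported specialization constant op.");
}
```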
When inside a loop, treat any read of outer expressions as happening
multiple times, forcing a temporary for said outer expressions.
This avoids the problem where we end up relying on loop-invariant code motion
in the downstream compiler when converting optimized shaders.
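As a sketch of the rule (the flags are hypothetical stand-ins for analysis the compiler already performs):

```cpp
#include <cstdint>
#include <unordered_set>

// When an expression defined outside the current loop is read from inside it,
// pretend it is read more than once; this pushes it over the "needs a
// temporary" threshold instead of leaving re-evaluation to the downstream
// compiler's loop-invariant code motion.
static void note_expression_read(uint32_t expr_id, bool reading_inside_loop,
                                 bool expr_defined_outside_loop,
                                 std::unordered_set<uint32_t> &forced_temporaries)
{
	if (reading_inside_loop && expr_defined_outside_loop)
		forced_temporaries.insert(expr_id);
}
```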
MoltenVK tessellation needs to be able to identify when a shader has declared
an output built-in, but does not populate it, in order to keep the expectations
about how intermediary buffers are populated aligned between tessellation stages.
Some fallout where internal functions now use stronger types.
It would be overkill to move everything over to strong types right now, but perhaps
we can move over to them slowly over time.
This was straightforward to implement in GLSL. The
`ShadingRateInterlockOrderedEXT` and `ShadingRateInterlockUnorderedEXT`
modes aren't implemented yet, because we don't support
`SPV_NV_shading_rate` or `SPV_EXT_fragment_invocation_density` yet.
HLSL and MSL were more interesting. They don't support this directly,
but they do support marking resources as "rasterizer ordered," which
does roughly the same thing. So this implementation scans all accesses
inside the critical section and marks all storage resources found
therein as rasterizer ordered. They also don't support the fine-grained
controls on pixel- vs. sample-level interlock and disabling ordering
guarantees that GLSL and SPIR-V do, but that's OK. "Unordered" here
merely means the order is undefined; that it just so happens to be the
same as rasterizer order is immaterial. As for pixel- vs. sample-level
interlock, Vulkan explicitly states:
> With sample shading enabled, [the `PixelInterlockOrderedEXT` and
> `PixelInterlockUnorderedEXT`] execution modes are treated like
> `SampleInterlockOrderedEXT` or `SampleInterlockUnorderedEXT`
> respectively.
and:
> If [the `SampleInterlockOrderedEXT` or `SampleInterlockUnorderedEXT`]
> execution modes are used in single-sample mode they are treated like
> `PixelInterlockOrderedEXT` or `PixelInterlockUnorderedEXT`
> respectively.
So this will DTRT for MoltenVK and gfx-rs, at least.
MSL additionally supports multiple raster order groups; resources that
are not accessed together can be placed in different ROGs to allow them
to be synchronized separately. A more sophisticated analysis might be
able to place resources optimally, but that's outside the scope of this
change. For now, we assign all resources to group 0, which should do for
our purposes.
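As an illustration of the approach (not the real pass), the marking boils down to something like this sketch, with hypothetical bookkeeping types:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct ResourceInfo
{
	bool rasterizer_ordered = false;
	uint32_t raster_order_group = 0;
};

// Walk the resources touched by opcodes inside the interlock critical section
// and mark them as rasterizer ordered, all in raster order group 0 for MSL.
static void mark_interlocked_resources(const std::vector<uint32_t> &accessed_in_critical_section,
                                       std::unordered_map<uint32_t, ResourceInfo> &resources)
{
	for (uint32_t id : accessed_in_critical_section)
	{
		auto &info = resources[id];
		info.rasterizer_ordered = true;
		info.raster_order_group = 0; // Every resource shares group 0 for now.
	}
}
```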
`glslang` doesn't support the `RasterizerOrdered` UAVs this
implementation produces for HLSL, so the test case needs `fxc.exe`.
It also insists on GLSL 4.50 for `GL_ARB_fragment_shader_interlock`,
even though the spec says it needs either 4.20 or
`GL_ARB_shader_image_load_store`; and it doesn't support the
`GL_NV_fragment_shader_interlock` extension at all. So I haven't been
able to test those code paths.
Fixes #1002.
When merging combined image samplers, we only looked at the sampler, but DXC
emits RelaxedPrecision only for the texture. It does not hurt to check for more
things.
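A sketch of the extra check, with a hypothetical decoration query:

```cpp
#include <cstdint>
#include <functional>

// Inherit RelaxedPrecision if *either* the texture or the sampler carries it,
// since DXC only decorates the texture half of the pair.
static bool combined_sampler_is_relaxed(uint32_t image_id, uint32_t sampler_id,
                                        const std::function<bool(uint32_t)> &has_relaxed_precision)
{
	return has_relaxed_precision(image_id) || has_relaxed_precision(sampler_id);
}
```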
We were going down a tree of expressions multiple times and this caused
an exponential explosion in time, which was not caught until recently.
Fix this by blocking any traversal going through an ID more than one
time.
Overall, this fix improves performance by almost an order of magnitude on a
particular test shader, rather than slowing it down by ~75x.
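The fix boils down to a visited set in the dependency walk; a minimal standalone sketch:

```cpp
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Dependency walk over expression IDs that refuses to enter the same ID twice.
// Without the visited set, a DAG of shared subexpressions is walked once per
// path, which blows up exponentially.
static void walk_dependencies(uint32_t id,
                              const std::unordered_map<uint32_t, std::vector<uint32_t>> &deps,
                              std::unordered_set<uint32_t> &visited)
{
	if (!visited.insert(id).second)
		return; // Already handled; block re-traversal through this ID.

	auto itr = deps.find(id);
	if (itr == deps.end())
		return;

	for (uint32_t dep : itr->second)
		walk_dependencies(dep, deps, visited);
}
```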