* Support Narrow Types in BitCast Folding Rule
This change adds support for narrow types in the BitCastScalarOrVector
folding rule. According to Section 2.2.1 of the SPIR-V spec, types that
are narrower than 32 bits are automatically either sign-extended or
zero-extended, depending on the type. With that guarantee, we can use
the first 32-bit word of any narrow type in the folding logic without
performing any special conversions.
In order to reduce code duplication, this change moves the
GetU32BitValue and GetU64BitValue functions from IntConstant to
ScalarConstant. Without this move, we would have needed an identical
version of GetU32BitValue on FloatConstant.
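A minimal sketch of the idea, using a simplified stand-in for the real
ScalarConstant class (GetU32BitValue is the real accessor; the folding
helper is illustrative):

```cpp
#include <cstdint>

// Simplified stand-in for spvtools::opt::analysis::ScalarConstant, which
// now owns GetU32BitValue(); only the relevant piece is shown.
class ScalarConstant {
 public:
  explicit ScalarConstant(uint32_t word) : word_(word) {}
  // Per Section 2.2.1 of the SPIR-V spec, narrow (<32-bit) constants are
  // already sign- or zero-extended into their first 32-bit word.
  uint32_t GetU32BitValue() const { return word_; }

 private:
  uint32_t word_;
};

// Because the extension is guaranteed, a bitcast of any scalar up to
// 32 bits wide can fold by reusing the first word directly.
uint32_t FoldNarrowBitCast(const ScalarConstant& c) {
  return c.GetU32BitValue();
}
```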
* Add Tests for 16-bit BitCast Folding
This change adds several new test cases to the
IntegerInstructionFoldingTest which trigger the 16-bit BitCast logic.
The logic for half types was also added to the integer case because
half-float values are hard to validate directly in C++ code; it is
easier to validate them as unsigned integers instead. This also lets us
verify the SPIR-V constant sign-extension logic.
* Add 8-Bit Folding Test Cases
This change adds a couple more test cases to the integer instruction
folding test suite in order to ensure that the BitCast logic also
works correctly with the Int8 shader capability.
The always-friendly messages make it harder to debug when the
disassembly is later generated without friendly names.
Additionally, the friendly-name-mapper is slow. Disabling it improves
performance of an ANGLE test that creates numerous shaders by ~5%.
Half of the messages used to output 'id[%name]' and the other half
id[%name]. With this change, all messages consistently output
'id[%name]'. Some typos
are also fixed in the process.
This was spotted in the Validation Layers, where OpSpecConstantOp %x
CompositeExtract %y 0 was being folded to a constant, but anything that
was using the result did not recognize it as a constant. The simple fix
was to add a const_mgr->MapInst(new_const_inst); call so that the next
instruction knew it was a constant.
Add name annotations to the generated instrumentation code to
make it easier to understand. Example spirv-cross output:
```glsl
vec4 _140;
if (0u < inst_bindless_direct_read_4(0u, 0u, 1u, uint(_19)))
{
    _140 = texture(textures[nonuniformEXT(_19)], inUV);
}
else
{
    inst_bindless_stream_write_4(50u, 1u, uint(_19), 0u);
    _140 = vec4(0.0);
}
```
* Improve algorithm to reorder blocks in a function
In dead branch elimination, blocks can end up in the wrong order, so
there is code to reorder the blocks in structured order. The problem is
that the algorithm doing this is very poor. It involves many searches in
the function for the correct position to place each block, as well as
moving many blocks within the vector.
The solution is to write a specialized function in the function class
that will reorder the blocks in structured order. After computing the
structured order, reordering the blocks can be done in linear time, with
very little overhead.
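A minimal sketch of the linear-time reordering, with a template
parameter standing in for the real BasicBlock pointer type:

```cpp
#include <list>
#include <utility>
#include <vector>

// Once the structured order has been computed (a list containing every
// block exactly once), rebuilding the block vector is one linear pass:
// no searching for insertion points and no shifting of elements.
template <typename BlockPtr>
void ReorderInStructuredOrder(std::vector<BlockPtr>* blocks,
                              const std::list<BlockPtr>& structured_order) {
  std::vector<BlockPtr> reordered;
  reordered.reserve(blocks->size());
  for (BlockPtr bb : structured_order) reordered.push_back(bb);
  *blocks = std::move(reordered);
}
```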
Using SinglePassRunAndMatch<> instead of SinglePassRunAndCheck<>
makes tests more concise and makes it possible to use pattern
matching features.
Effcee stateful pattern matching is used to make checking for generated
functions and global variables less repetitive. This approach isn't
worth it for the DebugPrintf functions because the generated code
changes depending on how many parameters are passed to each
debugPrintfEXT() call.
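For illustration, a sketch of such a test; the pass name and module body
are placeholders, and the CHECK lines use Effcee's FileCheck-style
syntax, where [[var:%\w+]] captures an id that later lines reuse:

```cpp
// Hypothetical test built on the pass-test fixture; SomePass and the
// module text are illustrative, not a real pass or shader.
TEST_F(SomePassTest, MatchesGeneratedVariable) {
  const std::string text = R"(
; CHECK: [[var:%\w+]] = OpVariable {{%\w+}} Private
; CHECK: {{%\w+}} = OpLoad {{%\w+}} [[var]]
OpCapability Shader
; ... remainder of the input module ...
)";
  SinglePassRunAndMatch<SomePass>(text, /* skip_nop = */ true);
}
```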
* spirv-opt: fix copy-propagate-arrays index optimization on structs.
As per the SPIR-V spec, OpAccessChain indices must be OpConstant when
indexing into a structure.
This optimization tries to remove cascades of loads, but it failed in
some scenarios:
```c
cbuffer MyStruct {
    uint my_field;
};

uint main(uint index) {
    const uint my_array[1] = { my_field };
    return my_array[index];
}
```
This is valid: the struct is indexed with a constant index, and the
array is then indexed using a dynamic index.
The optimization considered the local array to be useless and generated
a load directly from the struct.
* spirv-opt: prevent creation of unused instructions
The copy-propagate-arrays optimization pass would create unused
constants, even if the optimization did not complete.
This was caused by the way we handled OpAccessChain squashing: we only
referenced constants, and had to create them upfront.
Fixes #4887
Signed-off-by: Nathan Gauër <brioche@google.com>
Specifically, DebugSourceContinued, DebugCompilationUnit, and
DebugEntryPoint. These are top-level instructions which may have no
user other than the tool itself, and so should not be eliminated.
When folding a vector shuffle with an undef literal, it is possible for
the literal to be adjusted so that it is then interpreted as an index
into the input operands. This is fixed by special-casing undef literals
and leaving them unadjusted.
Fixes #4859
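A sketch of the special case, using illustrative names; in
OpVectorShuffle, the undef component literal is 0xFFFFFFFF, and only
real indices should be shifted into the combined operand space:

```cpp
#include <cstdint>
#include <vector>

constexpr uint32_t kUndefLiteral = 0xFFFFFFFFu;  // OpVectorShuffle undef.

// Remap shuffle literals when a fold concatenates operand vectors. The
// undef literal is left untouched so it is never misread as an index.
void AdjustShuffleIndices(std::vector<uint32_t>* indices, uint32_t offset) {
  for (uint32_t& idx : *indices) {
    if (idx != kUndefLiteral) idx += offset;
  }
}
```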
An access chain instruction interprets its index operands as signed.
The composite insert and extract instructions interpret their index
operands as unsigned, so it is not possible to represent a negative
number.
This commit adds a check to the local-access-chain-convert pass that
looks for a negative index in the access chain and skips the conversion
when one is found.
Fixes #4856
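A minimal sketch of the check, with an illustrative helper name; the
word comes from the integer constant feeding the access chain:

```cpp
#include <cstdint>

// OpAccessChain interprets the index as signed, while the literals on
// OpCompositeExtract/Insert are unsigned, so a negative index has no
// valid encoding after conversion and the pass must skip it.
bool IsNegativeAccessChainIndex(uint32_t constant_word) {
  return static_cast<int32_t>(constant_word) < 0;
}
```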
* spirv-as: Avoid overflow when parsing exponents on hex floats
When an exponent is so large that it would overflow the parser's int
type, saturate the exponent instead.
This allows extremely large exponents, saturating the value to infinity
when the exponent is positive and to zero when the exponent is
negative.
Fixes #4721.
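A sketch of the saturating accumulation with an illustrative signature;
the real parser's digit loop differs, but the clamping idea is the same:

```cpp
#include <limits>

// Accumulate exponent digits, saturating at INT_MAX instead of letting
// the multiply-add overflow, which is undefined behaviour for int.
int ParseExponent(const char* digits) {
  int exponent = 0;
  for (const char* p = digits; *p >= '0' && *p <= '9'; ++p) {
    if (exponent > (std::numeric_limits<int>::max() - 9) / 10) {
      exponent = std::numeric_limits<int>::max();  // Saturate.
    } else {
      exponent = exponent * 10 + (*p - '0');
    }
  }
  return exponent;
}
```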
* Avoid unexpected narrowing conversions from arithmetic operations
Co-authored-by: Alastair F. Donaldson <alastair.donaldson@imperial.ac.uk>
Excessive whitespace can lead to stack overflow during parsing as each
character of skipped whitespace involves a recursive call. An
iterative solution avoids this.
Fixes #4729.
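A sketch of the iterative form, with an illustrative signature; one
stack frame per character becomes one loop iteration:

```cpp
#include <cctype>

const char* SkipWhitespace(const char* p) {
  // Loop instead of recursing, so arbitrarily long whitespace runs
  // cannot overflow the stack.
  while (*p != '\0' && std::isspace(static_cast<unsigned char>(*p))) ++p;
  return p;
}
```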
This reverts commit d18d0d92e5.
This is reverted because it causes a 7X slowdown when legalizing
SPIR-V with NonSemantic.Shader.DebugInfo.100 instructions.
This is due to the creation of very large UseLists for several
heavily used operands in this extension, combined with the fact
that the original commit changed the performance of UseLists to O(n).
Fixes https://crbug.com/oss-fuzz/48578
* Adds structural reachability to basic blocks
* Calculated in the same manner as reachability, but using structural
successors
* Change structured CFG validation to use structural reachability
instead of reachability
* Fix some invalid reducer cases
Arrays do not have to have a size that is known at compile time; it
could be a spec constant. In these cases, treat the array as if it is
arbitrarily long. This commit treats it like an array of size
UINT32_MAX.
Fixes https://crbug.com/oss-fuzz/47397.
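A sketch of the fallback, with illustrative names; when the length is a
spec constant, the reducer treats the array as arbitrarily long:

```cpp
#include <cstdint>

// If the array length is an OpSpecConstant, its value is unknown at
// compile time, so assume the maximum representable length.
uint32_t EffectiveArrayLength(bool length_is_spec_constant,
                              uint32_t known_length) {
  return length_is_spec_constant ? UINT32_MAX : known_length;
}
```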
If the `instruction` operand of an extended instruction is too large,
it causes undefined behaviour when that value is cast to the enum for
the corresponding set. This happens with the NonSemanticDebug100
instruction set. We need to avoid the undefined behaviour.
Fixes #4727
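A sketch of the guard, using a hypothetical enum and bound; the point is
to range-check the raw word before casting, because producing a value
outside an enum's range (for enums without a fixed underlying type) is
undefined behaviour:

```cpp
#include <cstdint>

// Hypothetical stand-in for the extended-instruction-set enum; the bound
// is illustrative, not the real NonSemanticDebug100 maximum.
enum DebugInst { kDebugInfoNone = 0, kDebugInstMax = 38 };

bool ToDebugInst(uint32_t raw, DebugInst* out) {
  if (raw > static_cast<uint32_t>(kDebugInstMax)) return false;
  *out = static_cast<DebugInst>(raw);
  return true;
}
```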
Which functions are processed is determined by which ones are on the
call tree from the entry points before dead code is removed. So it is
possible that a function is processed because it is called from an
entry point, but its CFG is not cleaned up because the call to the
function was removed.
The fix is to process and clean up every function in the module. Since
all of the dead functions would have already been removed in an earlier
step of DCE, this should not make a difference in compile time.
Fixes #4731