This adds two passes: one to analyze a shader and determine which of its
input slots are live, and a second, run on the preceding shader, to
eliminate any stores to output slots that are not consumed by the
following shader.
These passes support vert, tesc, tese, geom, and frag shaders.
These passes are currently only available through the API.
Together with dead code elimination and the elimination of dead input and
output components and variables (WIP), these passes will allow users to
perform dead code elimination across shader boundaries.
Fixes #4918
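As a rough illustration, the two passes could be wired together through the
C++ optimizer interface as sketched below; the pass-creation names and
signatures here are assumptions for illustration, not the exact API.

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

#include "spirv-tools/optimizer.hpp"

// Sketch only: CreateAnalyzeLiveInputPass and CreateEliminateDeadOutputStoresPass
// are assumed names/signatures for the two new passes described above.
std::vector<uint32_t> StripDeadOutputStores(const std::vector<uint32_t>& frag,
                                            const std::vector<uint32_t>& vert) {
  std::unordered_set<uint32_t> live_locations;

  // First pass (assumed name): collect the input locations the fragment
  // shader actually reads.
  spvtools::Optimizer analyze(SPV_ENV_VULKAN_1_3);
  analyze.RegisterPass(spvtools::CreateAnalyzeLiveInputPass(&live_locations));
  std::vector<uint32_t> unused;
  analyze.Run(frag.data(), frag.size(), &unused);

  // Second pass (assumed name): drop stores to vertex outputs that nobody
  // consumes downstream.
  spvtools::Optimizer eliminate(SPV_ENV_VULKAN_1_3);
  eliminate.RegisterPass(
      spvtools::CreateEliminateDeadOutputStoresPass(&live_locations));
  std::vector<uint32_t> optimized_vert;
  eliminate.Run(vert.data(), vert.size(), &optimized_vert);
  return optimized_vert;
}
```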
* Prevent block merging from producing an invalid case construct by
merging a switch target/default with another construct's merge or
continue block
* This is to satisfy the structural dominance requirement between the
switch header and the case constructs
Fixes #4671
Fixes https://crbug.com/oss-fuzz/43265
* Only validate full layout in a Vulkan environment
* Universal validation still checks that the right decorations are
present, but the values are only considered for Vulkan
* One exception is that invalid overlaps are only checked for Vulkan
* This is a pragmatic choice as SPIR-V doesn't define the size of
types so the amount of universal checking would be quite limited
* Removed redundant check for GLSLShared and GLSLPacked decorations
* Should never have been validated as part of universal validation
* make conditionals independent
Fixes https://crbug.com/oss-fuzz/48553
* Assign a reflexive dominator if no other dominator can be found using
forward traversals
* This prevents a null pointer dereference in the sorting of the
output
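A minimal sketch of the fallback; the map-based representation here is
hypothetical, not the actual dominator-tree code.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical representation: the immediate dominator per block id, with 0
// meaning "not found by the forward traversals".
void AssignReflexiveDominators(const std::vector<uint32_t>& block_ids,
                               std::unordered_map<uint32_t, uint32_t>* idom) {
  for (uint32_t id : block_ids) {
    auto it = idom->find(id);
    if (it == idom->end() || it->second == 0) {
      // No dominator could be computed: make the block its own dominator so
      // that the code sorting the output never follows a null/absent entry.
      (*idom)[id] = id;
    }
  }
}
```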
* Support Narrow Types in BitCast Folding Rule
This change adds support for narrow types in the BitCastScalarOrVector
folding rule. According to Section 2.2.1 of the SPIR-V spec, types that
are narrower than 32 bits are automatically either sign extended or
zero extended, depending on the type. With that guaranteed, we should
be able to use the first 32-bit word of any narrow type for the folding
logic without performing any special conversions.
In order to reduce code duplication, this change moves the
GetU32BitValue and GetU64BitValue functions from IntConstant to
ScalarConstant. Without this move, we would have needed an identical
version of GetU32BitValue on FloatConstant.
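A small standalone illustration (not the SPIRV-Tools code itself) of why the
first 32-bit word is enough: a narrow value is stored sign or zero extended,
so truncating the word recovers the original bits without any conversion.

```cpp
#include <cassert>
#include <cstdint>

int main() {
  // A 16-bit signed constant as it would be stored in a SPIR-V word:
  // sign-extended into 32 bits (Section 2.2.1 of the SPIR-V spec).
  int16_t original = -42;
  uint32_t word = static_cast<uint32_t>(static_cast<int32_t>(original));

  // The folding logic can work on the 32-bit word directly; truncating it
  // back to 16 bits recovers the original bit pattern.
  uint16_t bits = static_cast<uint16_t>(word & 0xFFFFu);
  assert(static_cast<int16_t>(bits) == original);

  // Unsigned narrow types are zero-extended, so the same truncation works.
  uint16_t uoriginal = 0xABCDu;
  uint32_t uword = static_cast<uint32_t>(uoriginal);
  assert(static_cast<uint16_t>(uword & 0xFFFFu) == uoriginal);
  return 0;
}
```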
* Add Tests for 16-bit BitCast Folding
This change adds several new test cases to the
IntegerInstructionFoldingTest which trigger the 16-bit BitCast logic.
The logic for half types was also added to the integer case since we
can't easily validate half float types in C++ code. It's easier to
validate them as unsigned integers instead. Plus, this also allows us
to verify the SPIR-V constant sign extension logic.
* Add 8-Bit Folding Test Cases
This change adds a couple more test cases to the integer instruction
folding test suite in order to ensure that the BitCast logic also
works correctly with the Int8 shader capability.
The always-friendly messages make it harder to debug when the
disassembly is later generated without friendly names.
Additionally, the friendly-name-mapper is slow. Disabling it improves
performance of an ANGLE test that creates numerous shaders by ~5%.
Half of the messages used to output 'id[%name]' (with quotes) and half
id[%name] (without). With this change, all messages consistently output
'id[%name]'. Some typos are also fixed in the process.
This was spotted in the Validation Layers: an `OpSpecConstantOp %x CompositeExtract %y 0` was being folded to a constant, but anything that used it did not recognize it as a constant. The simple fix was to add a `const_mgr->MapInst(new_const_inst);` so that the next instruction knew it was a constant.
* Remove `spvOpcodeTerminatesExecution`
This function is the same as `spvOpcodeIsAbort` except for
OpUnreachable. The names are so close in meaning that it is hard to
distinguish them. I've removed `spvOpcodeTerminatesExecution` since it
is used in only a single place. I've special cased OpUnreachable in
that location.
At the same time, I fixed up some comments related to the use of the
TerminatesExecution and IsAbort functions.
Following up on #4930.
* Fix comments
Removed now unused DebugDeclare visibility logic for generating
DebugValue.
Also eliminated the phi sort introduced in 272e4b3. This should have
been removed in the first commit.
Changed a couple of small parts of the algorithm to reduce the time to build
the dominator trees. There should be no visible changes.
Add a depth-first search algorithm that does not run a function on
back edges. Checking whether an edge is a back edge is time consuming, and
pointless if the function run on it is a no-op.
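A sketch of the idea with hypothetical names; the real back-edge test is far
more expensive than the on-stack check used here to stand in for it.

```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Hypothetical CFG: successors per block id.
using Cfg = std::unordered_map<uint32_t, std::vector<uint32_t>>;

// Depth-first search. |on_backedge| may be empty; in that case the costly
// "is this edge a back edge?" test is never performed.
void Dfs(const Cfg& cfg, uint32_t entry,
         const std::function<void(uint32_t)>& pre_visit,
         const std::function<void(uint32_t, uint32_t)>& on_backedge) {
  std::unordered_set<uint32_t> on_stack;  // blocks on the current DFS path
  std::unordered_set<uint32_t> visited;
  std::function<void(uint32_t)> visit = [&](uint32_t block) {
    visited.insert(block);
    on_stack.insert(block);
    if (pre_visit) pre_visit(block);
    auto it = cfg.find(block);
    if (it != cfg.end()) {
      for (uint32_t succ : it->second) {
        // Only pay for the back-edge check if someone wants the result.
        if (on_backedge && on_stack.count(succ)) {
          on_backedge(block, succ);
          continue;
        }
        if (!visited.count(succ)) visit(succ);
      }
    }
    on_stack.erase(block);
  };
  visit(entry);
}
```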
Add name annotations to the generated instrumentation code to
make it easier to understand. Example spirv-cross output:
```glsl
vec4 _140;
if (0u < inst_bindless_direct_read_4(0u, 0u, 1u, uint(_19)))
{
    _140 = texture(textures[nonuniformEXT(_19)], inUV);
}
else
{
    inst_bindless_stream_write_4(50u, 1u, uint(_19), 0u);
    _140 = vec4(0.0);
}
```
* Improve algorithm to reorder blocks in a function
In dead branch elimination, blocks can end up in the wrong order, so
there is code to reorder the blocks into structured order. The problem is
that the algorithm doing this is very poor: it involves many searches
through the function for the correct position to place each block, as well
as moving many blocks within the vector.
The solution is to write a specialized function in the function class
that reorders the blocks into structured order. After computing the
structured order, reordering the blocks can be done in linear time, with
very little overhead.
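A sketch of the linear-time reordering with simplified, hypothetical types:
index the existing blocks by id once, then rebuild the block list by walking
the precomputed structured order.

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical block type; only the id matters for this sketch.
struct BasicBlock {
  uint32_t id;
};

// Reorder |blocks| to match |structured_order| in O(n): one pass to build an
// id -> block map, one pass to emit the blocks in the requested order.
void ReorderInStructuredOrder(std::vector<std::unique_ptr<BasicBlock>>* blocks,
                              const std::vector<uint32_t>& structured_order) {
  std::unordered_map<uint32_t, std::unique_ptr<BasicBlock>> by_id;
  by_id.reserve(blocks->size());
  for (auto& block : *blocks) {
    uint32_t id = block->id;
    by_id.emplace(id, std::move(block));
  }
  blocks->clear();
  for (uint32_t id : structured_order) {
    auto it = by_id.find(id);
    if (it != by_id.end()) blocks->push_back(std::move(it->second));
  }
}
```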
* spirv-opt: fix copy-propagate-arrays index optimization on structs.
As per the SPIR-V spec:
OpAccessChain indices must be OpConstant when indexing into a structure.
This optimization tries to remove load cascades, but failed in some
scenarios:
```c
cbuffer MyStruct {
  uint my_field;
};
uint main(uint index) {
  const uint my_array[1] = { my_field };
  return my_array[index];
}
```
This is valid as the struct is indexed with a constant index, and then
the array is indexed using a dynamic index.
The optimization would consider the local array to be useless and
generate a load directly from the struct.
* spirv-opt: prevent creation of unused instructions
The copy-propagate-arrays optimization pass would create unused constants,
even if the optimization did not complete.
This was caused by the way we handled OpAccessChain squashing: we
only referenced constants, and had to create them upfront.
Fixes #4887
Signed-off-by: Nathan Gauër <brioche@google.com>
Specifically, DebugSourceContinued, DebugCompilationUnit, and
DebugEntryPoint. These instructions are top-level instructions
which do not, or may not, have a user other than the tool, and so
should not be eliminated.
When folding a vector shuffle with an undef literal, it is possible that the
literal is adjusted so that it will then be interpreted as an index into
the input operands. This is fixed by special-casing the undef literal and
not adjusting it.
Fixes #4859
The disassembler was called with non-default params, losing FRIENDLY_NAMES.
This commit changes the call options to allow spirv-opt to show
friendly names instead of raw ids, which might be more helpful when reading
the spirv-opt output.
Fixes #4882
Signed-off-by: Nathan Gauër <brioche@google.com>
An access chain instruction interprets its index operands as signed.
The composite insert and extract instructions interpret their index
operands as unsigned, so it is not possible to represent a negative
number.
This commit adds a check to the local-access-chain-convert pass: if the
access chain contains a negative index, the conversion is not done.
Fixes #4856
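A tiny standalone illustration (not the pass code) of the mismatch: the same
32-bit word read as a signed access chain index versus an unsigned
OpCompositeExtract literal.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // The word an OpConstant would carry for the index -1.
  uint32_t word = 0xFFFFFFFFu;

  // OpAccessChain treats the index as signed...
  int32_t as_access_chain_index = static_cast<int32_t>(word);  // -1

  // ...but an OpCompositeExtract literal is unsigned, so the same word would
  // mean index 4294967295, which is not the same access at all.
  uint32_t as_extract_literal = word;  // 4294967295

  std::printf("%d vs %u\n", as_access_chain_index, as_extract_literal);
  return 0;
}
```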
These asserts are not valid for string literals, which may
contain several words:
```cpp
assert(a_operand.words.size() == 1);
assert(b_operand.words.size() == 1);
```
It looks like they only make sense for the default case, which
assumes that both operands contain a single word.
Running a debug version of spirv-diff on any shader containing
a string literal will hit the original asserts.
A recent libc++ roll in Chrome warned of a deprecated copy. We're
still looking if this is a bug in libc++ or a valid warning, but
removing the redundant line is a safe workaround or fix in either
case.
See discussion in https://crrev.com/c/3791771
* spirv-as: Avoid overflow when parsing exponents on hex floats
When an exponent is so large that it would overflow the int
type in the parser, saturate the exponent.
This allows extremely large exponents: the value saturates
to infinity when the exponent is positive, and to zero when the exponent
is negative.
Fixes #4721.
* Avoid unexpected narrowing conversions from arithmetic operations
Co-authored-by: Alastair F. Donaldson <alastair.donaldson@imperial.ac.uk>
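A simplified sketch of the exponent saturation described above (a standalone
helper, not the actual parser code): digits keep being read, but the running
exponent is clamped instead of being allowed to overflow.

```cpp
#include <cctype>
#include <climits>
#include <string>

// Accumulate a decimal exponent from |text|, clamping at INT_MAX so that a
// pathologically long digit string cannot overflow the int accumulator.
// The caller later maps a saturated positive exponent to infinity and a
// saturated negative exponent to zero.
int ParseSaturatedExponent(const std::string& text, bool negative) {
  int exponent = 0;
  for (char c : text) {
    if (!std::isdigit(static_cast<unsigned char>(c))) break;
    int digit = c - '0';
    if (exponent > (INT_MAX - digit) / 10) {
      exponent = INT_MAX;  // saturate instead of overflowing
      break;
    }
    exponent = exponent * 10 + digit;
  }
  return negative ? -exponent : exponent;
}
```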
Excessive whitespace can lead to stack overflow during parsing as each
character of skipped whitespace involves a recursive call. An
iterative solution avoids this.
Fixes #4729.
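The shape of the fix, sketched with standalone helpers rather than the real
parser: a loop advances past whitespace instead of one recursive call per
skipped character.

```cpp
#include <cctype>
#include <cstddef>
#include <string>

// Recursive version: one stack frame per skipped character, so megabytes of
// whitespace can exhaust the stack.
std::size_t SkipWhitespaceRecursive(const std::string& text, std::size_t pos) {
  if (pos < text.size() &&
      std::isspace(static_cast<unsigned char>(text[pos]))) {
    return SkipWhitespaceRecursive(text, pos + 1);
  }
  return pos;
}

// Iterative version: constant stack usage regardless of input size.
std::size_t SkipWhitespaceIterative(const std::string& text, std::size_t pos) {
  while (pos < text.size() &&
         std::isspace(static_cast<unsigned char>(text[pos]))) {
    ++pos;
  }
  return pos;
}
```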
This reverts commit d18d0d92e5.
This is reverted because it causes a 7X slowdown when legalizing
SPIR-V with NonSemantic.Shader.DebugInfo.100 instructions.
This is due to the creation of very large UseLists for several
heavily used operands of this extension, combined with the fact
that the original commit changed the performance of UseLists to O(n).
Fix code that traverses the def-use user structure at the same time
that it is changing it. This is dicey at best and error-prone at worst.
This was uncovered while making a change to the id_to_user representation.
Fixes https://crbug.com/oss-fuzz/48578
* Adds structural reachability to basic blocks
* Calculated in the same manner as reachability, but using structural
successors
* Change structured CFG validation to use structural reachability
instead of reachability
* Fix some invalid reducer cases
The fuzzer creates code with very large arrays, and scalar replacement
times out. This adds a limit on the size of the composites that will be
split when fuzzing.
Fixes https://crbug.com/oss-fuzz/48630
Arrays do not have to have a size that is known at compile time; the
size could be a spec constant. In these cases, treat the array
as if it is arbitrarily long. This commit treats it like an
array of size UINT32_MAX.
Fixes https://crbug.com/oss-fuzz/47397.
If the `instruction` operand in an extended instruction is
too large, it causes undefined behaviour when that value is cast to the
enum for the corresponding set. This happens with the
NonSemanticDebug100 instruction set. We need to avoid the undefined
behaviour.
Fixes #4727
Which functions are processed is determined by which ones are on the
call tree from the entry points before dead code is removed. So it is
possible that a function is processed because it is called from an entry
point, but its CFG is not cleaned up because the call to the function
was removed.
The fix is to process and clean up every function in the module. Since
all of the dead functions would have already been removed in an earlier
step of DCE, it should not make a difference in compile time.
Fixes #4731
* ANGLE builds with -Werror,-Wunreachable-code-loop-increment which had
a problem with grabbing the first construct in a loop
* Change the code to just use begin instead
* Structural dominance introduced in SPIR-V 1.6 rev2
* Changes the structured CFG validation to use structural dominance
* Structural dominance is based on a CFG where merge and continue
declarations are counted as graph edges
* Basic blocks now track structural predecessors and structural
successors
* Add validation for entry into a loop
* Fixed an issue with inlining a single block loop
* The continue target needs to be moved to the latch block
* Simplify the calculation of structured exits
* No longer requires block depth
* Update many invalid tests
When the `SPV_BINARY_TO_TEXT_OPTION_PRINT` option is specified, `spvtext` will not be created (see 37d2396cab/source/disassemble.cpp (L117)), and the attempt to dereference its members (`spvtext->str` and `spvtext->length`) results in a segmentation fault.
This is fixed by first checking whether `spvtext` is null.
Co-authored-by: Steven Perron <stevenperron@google.com>
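A sketch of the guarded call through the public C API; the wrapper function
here is illustrative, but the API calls and option flags are the existing
ones.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

#include "spirv-tools/libspirv.h"

void Disassemble(const std::vector<uint32_t>& binary) {
  spv_context context = spvContextCreate(SPV_ENV_UNIVERSAL_1_6);
  spv_text text = nullptr;
  spv_diagnostic diagnostic = nullptr;
  uint32_t options = SPV_BINARY_TO_TEXT_OPTION_PRINT |
                     SPV_BINARY_TO_TEXT_OPTION_FRIENDLY_NAMES;
  spv_result_t result = spvBinaryToText(context, binary.data(), binary.size(),
                                        options, &text, &diagnostic);
  // With the PRINT option the disassembly goes to stdout and |text| stays
  // null, so guard the dereference instead of assuming it is populated.
  if (result == SPV_SUCCESS && text != nullptr) {
    std::fwrite(text->str, 1, text->length, stdout);
  }
  spvTextDestroy(text);
  spvDiagnosticDestroy(diagnostic);
  spvContextDestroy(context);
}
```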
We currently build the structured order for all nodes reachable from the
loop header when unrolling a loop. However, unrolling only needs the
nodes in the loop and possibly the merge node.
To avoid needless computation, I have implemented a search that will
stop at the merge node.
Fixes #4827
Uses a preprocessor macro to bail out of unrolling loops with large
iteration counts during fuzzing, to reduce the number of
timeouts/memouts that arise.
Related issue: #4728.
The code in `CFG::SplitLoopHeader` assumes the loop header is not the
latch. This leads to it not being able to find the latch block. This
has been fixed, and a test added.
Fixes #4527
If the predecessor blocks are the same, then there is only one value for the
OpPhi. The simplification pass will simplify it, and it causes problems for
if-conversion. In these cases, if-conversion can just punt.
Fixes #3554.
There is an assert that verifies that the binary did not change when the
optimizer said that it did not. However, if the input binary is in big endian
format, the optimizer will encode the optimized binary in little endian. This
causes the assert to fail. Since we do not believe that anybody cares about a
big endian format, we will disable the assert in that case.
Fixes #4722
The Reciprocal function expects a divide-by-0 to return NaN, and then Reciprocal will return 0.
Since divide-by-0 is actually undefined, we identify this case early and return 0.
No new tests are needed because we already test folding of divide-by-0.
Fixes #4715
An access chain could have a constant index that is an out-of-bounds
access. This is valid SPIR-V, even if it can cause problems at runtime.
However, it is not valid to have an OpCompositeExtract with an out-of-bounds
access. This means we have to stop local-access-chain-convert
from making that change.
Fixes #4605
A std::set is used instead of a std::vector, with the elements
ordered by member index first. Decorations for fields are now looked up
by going over the range of decorations for the member only, instead of
the whole set.
In an ANGLE test that generates a struct with 4096 members, validation
goes down from ~140ms to ~90ms. On debug builds, the difference is more
pronounced, going down from ~2.5s to ~600ms.
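A sketch of the data-structure change with simplified stand-in types: a
std::set keyed primarily by member index makes "all decorations of member k"
a cheap range query instead of a scan over every decoration of the struct.

```cpp
#include <cstdint>
#include <set>
#include <utility>

// Simplified stand-in for a member decoration: (member index, decoration id).
using MemberDecoration = std::pair<uint32_t, uint32_t>;

// Ordered by member index first, so all decorations of one member are
// contiguous in the set.
using MemberDecorationSet = std::set<MemberDecoration>;

// Visit only the decorations of |member|: two lower_bound calls give the
// range in O(log n) instead of scanning the whole container.
template <typename Visitor>
void ForEachMemberDecoration(const MemberDecorationSet& decorations,
                             uint32_t member, Visitor visit) {
  auto begin = decorations.lower_bound({member, 0});
  auto end = decorations.lower_bound({member + 1, 0});
  for (auto it = begin; it != end; ++it) visit(it->second);
}
```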
This change adds a folding rule which transforms x * y - a and a - x * y
into FMA(x, y, -a) and FMA(-x, y, a), respectively.
While the SPIR-V instruction count remains the same, target instruction
sets typically feature FMA instruction variants that can negate an
operand. Also this transformation may unlock further optimizations which
eliminate the negation.
(Google bug: b/226145988)
* Add more folding for composite instructions
Fold chains of insert into construct
If a chain of OpCompositeInsert instructions writes to every element of a
composite object, then we can replace the chain with an OpCompositeConstruct.
Fold a construct fed by extracts to a single extract
We already fold an OpCompositeConstruct when it is simply reconstructing
an object that was decomposed by a series of OpCompositeExtract
instructions. However, we do not do that if that object is an element
of a larger object.
I have updated the rule so that if the original object is an element
of a larger object, then the OpCompositeConstruct is replaced with a
single OpCompositeExtract from the larger object.
Fixes #4371.