Adding a new type instruction required making the same change in two
places.
Also remove IsCompileTimeConstantInst, which was used in a single place,
and replace its use with an expression that better conveys the intent.
Change-Id: I49330b74bd34a35db6369c438c053224805c18e0
Signed-off-by: Kevin Petit <kevin.petit@arm.com>
* Fix null pointer in FoldInsertWithConstants.
Struct types are not supported in constant folding yet.
* Added 'Test case 16' to fold_test.
Tests that OpCompositeInsert is not folded on a struct type.
- Make more use of InstructionBuilder instruction helper methods
- Use MakeUnique<>() rather than new
- Add InstrumentPass::GenReadFunctionCall(), which optimizes function
calls in a loop with constant arguments and no side effects (a generic
sketch of the caching idea follows below).
This is a preparatory change for future work on the instrumentation
code which will add more generated functions.
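As a generic illustration of the caching idea only (this is not the
InstrumentPass implementation, and all names below are invented), a
side-effect-free call whose arguments are all constants can be generated
once and its result id reused:
```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical memoization sketch: reuse the result id of a previously
// generated call when the callee and its constant arguments match.
class CallCache {
 public:
  using Key = std::pair<uint32_t, std::vector<uint32_t>>;  // callee id + args

  // Returns a cached result id, or 0 if the call still needs generating.
  uint32_t Lookup(uint32_t callee, const std::vector<uint32_t>& const_args) {
    auto it = cache_.find({callee, const_args});
    return it == cache_.end() ? 0u : it->second;
  }

  void Remember(uint32_t callee, const std::vector<uint32_t>& const_args,
                uint32_t result_id) {
    cache_[{callee, const_args}] = result_id;
  }

 private:
  std::map<Key, uint32_t> cache_;
};
```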
Avoid using OpConstantNull with types that do not allow it.
Update existing tests for slight changes in code generation.
Add new tests based on the Vulkan Validation layer test case
that exposed this problem.
This can cause interface incompatibility and should only be done
if ADCE has been applied to the following shader in the pipeline.
For this reason, this capability is not available through the CLI and
is exposed only as a non-default option through the API. This functionality is
intended as part of a larger cross-shader dead code elimination
sequence.
Constexpr guarantees no runtime initialization in addition to const semantics.
Moving all of opt/ to constexpr.
Moving all compile-unit statics to anonymous namespaces to unify the
method used (an anonymous namespace and a static have the same behavior
here AFAIK).
Signed-off-by: Nathan Gauër <brioche@google.com>
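For illustration, a minimal sketch of the distinction (names are
placeholders, not code from opt/): constexpr forces compile-time
initialization, while const alone only promises immutability.
```cpp
#include <cstdint>

namespace {  // anonymous namespace instead of a file-level `static`

uint32_t ComputeLimit() { return 255u; }  // not constexpr

// Dynamic initialization: runs at program start-up because the
// initializer is not a constant expression.
const uint32_t kRuntimeLimit = ComputeLimit();

// Guaranteed compile-time initialization in addition to constness.
constexpr uint32_t kCompileTimeLimit = 255u;

}  // namespace

int main() { return static_cast<int>(kCompileTimeLimit - kRuntimeLimit); }
```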
Safe version will only optimize vertex shaders. All other shaders will
succeed without change.
Change --eliminate-dead-input-components to use new safe version.
Unsafe version (allowing non-vertex shaders) currently only available
through API. Should only be used in combination with other optimizations
to keep interfaces consistent. See optimizer.hpp for more details.
Add a flags field at the first offset within this buffer.
Define flags to allow buffer OOB checking to be enabled or
disabled at run time. This is to support VK_EXT_pipeline_robustness.
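As a rough illustration only (flag names and bit positions here are
hypothetical, not the pass's actual layout), a flags word at offset 0 of
the buffer could be tested like this:
```cpp
#include <cstdint>

// Hypothetical flag bit; the real bit assignments belong to the
// instrumentation pass, not this sketch.
constexpr uint32_t kInstBufferOobCheckEnable = 0x1u;

// The first 32-bit word of the buffer holds the flags field.
inline bool BufferOobCheckEnabled(const uint32_t* buffer_data) {
  const uint32_t flags = buffer_data[0];  // flags at offset 0
  return (flags & kInstBufferOobCheckEnable) != 0u;
}
```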
This pass eliminates components of output variables that are not stored
to. Currently this just eliminates trailing components of arrays and
structs, all of which are dead.
WARNING: This pass is not designed to be a standalone pass as it can
cause interface incompatibilities with the following shader in the
pipeline. See the comment in optimizer.hpp for best usage. This pass is
currently available only through the API; it is not available in the CLI.
This commit also fixes a bug in CreateDecoration() which is part of the
system of generating SPIR-V from the Type manager.
This adds two passes to accomplish this: one pass analyzes a shader
to determine the input slots that are live; the second pass is run on
the preceding shader to eliminate any stores to output slots that are
not consumed by the following shader.
These passes support vert, tesc, tese, geom, and frag shaders.
These passes are currently only available through the API.
These passes, together with dead code elimination and elimination of
dead input and output components and variables (WIP), will allow users
to do dead code elimination across shader boundaries.
Fixes #4918
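As a sketch only (the pass-creation function names and signatures below
are assumptions about the API, not confirmed; see optimizer.hpp for the
real factories), the two passes might be wired up through the C++
Optimizer interface roughly like this:
```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

#include "spirv-tools/optimizer.hpp"

// Hypothetical sketch: analyze the following shader for live input
// locations, then strip dead output stores from the preceding shader.
void CrossStageDce(std::vector<uint32_t>* preceding_shader,
                   const std::vector<uint32_t>& following_shader) {
  std::unordered_set<uint32_t> live_locs;
  std::unordered_set<uint32_t> live_builtins;

  // Assumed factory name: CreateAnalyzeLiveInputPass.
  spvtools::Optimizer analyze(SPV_ENV_VULKAN_1_2);
  analyze.RegisterPass(
      spvtools::CreateAnalyzeLiveInputPass(&live_locs, &live_builtins));
  std::vector<uint32_t> scratch;
  analyze.Run(following_shader.data(), following_shader.size(), &scratch);

  // Assumed factory name: CreateEliminateDeadOutputStoresPass.
  spvtools::Optimizer eliminate(SPV_ENV_VULKAN_1_2);
  eliminate.RegisterPass(spvtools::CreateEliminateDeadOutputStoresPass(
      &live_locs, &live_builtins));
  eliminate.Run(preceding_shader->data(), preceding_shader->size(),
                preceding_shader);
}
```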
* Prevent block merging from producing an invalid case construct by
merging a switch target/default with another construct's merge or
continue block
* This is to satisfy the structural dominance requirement between the
switch header and the case constructs
* Support Narrow Types in BitCast Folding Rule
This change adds support for narrow types in the BitCastScalarOrVector
folding rule. According to Section 2.2.1 of the SPIR-V spec, types that
are narrower than 32 bits are automatically either sign extended or
zero extended, depending on the type. With that guaranteed, we should
be able to use the first 32-bit word of any narrow type for the folding
logic without performing any special conversions.
In order to reduce code duplication, this change moves the
GetU32BitValue and GetU64BitValue functions from IntConstant to
ScalarConstant. Without this move, we would have needed an identical
version of GetU32BitValue on FloatConstant.
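A minimal standalone illustration of that point (generic C++, not the
folder's code): a 16-bit constant already occupies a full 32-bit word in
sign- or zero-extended form, so the low word can be reinterpreted directly.
```cpp
#include <cstdint>
#include <iostream>

int main() {
  // A 16-bit signed constant as it is stored in a SPIR-V word:
  // sign-extended into the full 32-bit word.
  int16_t narrow = -2;
  uint32_t word = static_cast<uint32_t>(static_cast<int32_t>(narrow));

  // A bitcast to an unsigned 16-bit type only needs the low bits of
  // that same word; no extra conversion step is required.
  uint16_t as_u16 = static_cast<uint16_t>(word & 0xFFFFu);
  std::cout << std::hex << word << " -> " << as_u16 << "\n";  // fffffffe -> fffe
  return 0;
}
```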
* Add Tests for 16-bit BitCast Folding
This change adds several new test cases to the
IntegerInstructionFoldingTest which trigger the 16-bit BitCast logic.
The logic for half types was also added to the integer case since we
can't easily validate half float types in C++ code. It's easier to
validate them as unsigned integers instead. This also allows us to
verify the SPIR-V constant sign extension logic.
* Add 8-Bit Folding Test Cases
This change adds a couple more test cases to the integer instruction
folding test suite in order to ensure that the BitCast logic also
works correctly with the Int8 shader capability.
This was spotted in the Validation Layers, where
`OpSpecConstantOp %x CompositeExtract %y 0` was being folded to a constant,
but anything that was using it was not recognizing it as a constant. The
simple fix was to add a `const_mgr->MapInst(new_const_inst);` so the next
instruction knew it was a constant.
* Remove `spvOpcodeTerminatesExecution`
This function is the same as `spvOpcodeIsAbort` except for
OpUnreachable. The names are so close in meaning that it is hard to
distinguish them. I've removed `spvOpcodeTerminatesExecution` since it
is used in only a single place. I've special-cased OpUnreachable in
that location.
At the same time, I fixed up some comments related to the use of the
TerminatesExecution and IsAbort functions.
Following up on #4930.
* Fix comments
Removed the now-unused DebugDeclare visibility logic for generating
DebugValue.
Also eliminated the phi sort introduced in 272e4b3. This should have
been removed in the first commit.
Changed a couple of small parts of the algorithm to reduce the time to
build the dominator trees. There should be no visible changes.
Add a depth-first search algorithm that does not run a function on
back edges. Checking whether an edge is a back edge is time consuming,
and pointless if the function run on it is a no-op.
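A generic sketch of the idea (illustrative only, not the utility added
here): when the caller provides no back-edge callback, the traversal
never needs to classify edges at all.
```cpp
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Generic sketch: a depth-first search that never classifies edges, so
// the costly "is this a back edge?" test is skipped entirely when no
// back-edge callback is needed.
void DepthFirstSearch(
    uint32_t entry,
    const std::unordered_map<uint32_t, std::vector<uint32_t>>& successors,
    const std::function<void(uint32_t)>& on_visit) {
  std::unordered_set<uint32_t> visited;
  std::vector<uint32_t> stack = {entry};
  while (!stack.empty()) {
    uint32_t node = stack.back();
    stack.pop_back();
    if (!visited.insert(node).second) continue;  // already visited
    on_visit(node);
    auto it = successors.find(node);
    if (it == successors.end()) continue;
    for (uint32_t succ : it->second) stack.push_back(succ);
  }
}
```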
Add name annotations to the generated instrumentation code to
make it easier to understand. Example spirv-cross output:
vec4 _140;
if (0u < inst_bindless_direct_read_4(0u, 0u, 1u, uint(_19)))
{
    _140 = texture(textures[nonuniformEXT(_19)], inUV);
}
else
{
    inst_bindless_stream_write_4(50u, 1u, uint(_19), 0u);
    _140 = vec4(0.0);
}
* Improve algorithm to reorder blocks in a function
In dead branch elimination, blocks can end up in the wrong order, so
there is code to reorder the blocks in structured order. The problem is
that the algorithm used to do that is very poor. It involves many
searches in the function for the correct position to place the block,
as well as moving many blocks in the vector.
The solution is to write a specialized function in the function class
that will reorder the blocks in structured order. After computing the
structured order, reordering the blocks can be done in linear time, with
very little overhead.
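A generic sketch of the linear-time idea (not the actual method added to
the function class): once the desired order is known, one pass moves
every block into place.
```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <vector>

struct Block { uint32_t id; };

// Generic sketch: reorder |blocks| to match |desired_order| (a list of
// block ids) in linear time, instead of repeatedly searching for the
// right position and shifting elements inside the vector.
void ReorderBlocks(std::vector<std::unique_ptr<Block>>* blocks,
                   const std::vector<uint32_t>& desired_order) {
  std::unordered_map<uint32_t, std::unique_ptr<Block>> by_id;
  for (auto& block : *blocks) by_id[block->id] = std::move(block);
  blocks->clear();
  for (uint32_t id : desired_order) blocks->push_back(std::move(by_id[id]));
}
```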
* spirv-opt: fix copy-propagate-arrays index optimization on structs.
As per SPIR-V spec:
OpAccessChain indices must be OpConstant when indexing into a structure.
This optimization tries to remove cascading loads, but failed in some
scenarios:
```c
cbuffer MyStruct {
  uint my_field;
};

uint main(uint index) {
  const uint my_array[1] = { my_field };
  return my_array[index];
}
```
This is valid: the struct is indexed with a constant index, and then
the array is indexed using a dynamic index.
The optimization would consider the local array to be useless and
generate a load directly into the struct with the dynamic index, which
violates the rule above.
* spirv-opt: prevent creation of unused instructions
The copy-propagate-arrays optimization pass would create unused constants,
even when the optimization did not complete.
This was caused by the way we handled OpAccessChain squashing: we
only referenced constants, and had to create them upfront.
Fixes #4887
Signed-off-by: Nathan Gauër <brioche@google.com>
Specifically, DebugSourceContinued, DebugCompilationUnit, and
DebugEntryPoint. These are top-level instructions which do not, or may
not, have a user other than the tool and so should not be eliminated.
When folding a vector shuffle with an undef literal, it is possible that the
literal is adjusted so that it will then be interpreted as an index into
the input operands. This is fixed by special-casing the undef literal
and leaving it unadjusted.
Fixes #4859
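A generic sketch of the adjustment rule (illustrative only, not the
folder's code): when component indices referring to one shuffle operand
are remapped, the undef sentinel must be passed through untouched.
```cpp
#include <cstdint>
#include <vector>

// 0xFFFFFFFF marks an undefined component in OpVectorShuffle.
constexpr uint32_t kUndefComponent = 0xFFFFFFFFu;

// Generic sketch: shift indices by |offset| during folding, but leave
// the undef literal alone so it is not turned into a real index.
std::vector<uint32_t> AdjustIndices(const std::vector<uint32_t>& indices,
                                    uint32_t offset) {
  std::vector<uint32_t> result;
  for (uint32_t index : indices) {
    result.push_back(index == kUndefComponent ? kUndefComponent
                                              : index + offset);
  }
  return result;
}
```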
The disassembler was called with non-default params, losing FRIENDLY_NAMES.
This commit changes the call options to allow spirv-opt to show
friendly names instead of raw ids. This might be more helpful when
reading the spirv-opt output.
Fixes #4882
Signed-off-by: Nathan Gauër <brioche@google.com>
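For reference, a minimal sketch of requesting friendly names through the
public C++ API (the target environment and helper name here are
placeholders):
```cpp
#include <cstdint>
#include <string>
#include <vector>

#include "spirv-tools/libspirv.hpp"

// Disassemble |binary| with friendly names instead of raw ids.
std::string DisassembleFriendly(const std::vector<uint32_t>& binary) {
  spvtools::SpirvTools tools(SPV_ENV_UNIVERSAL_1_3);
  std::string text;
  tools.Disassemble(binary, &text,
                    SPV_BINARY_TO_TEXT_OPTION_FRIENDLY_NAMES |
                        SPV_BINARY_TO_TEXT_OPTION_INDENT);
  return text;
}
```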
An access chain instruction interprets its index operands as signed.
The composite insert and extract instructions interpret their index
operands as unsigned, so it is not possible to represent a negative
number.
This commit adds a check to the local-access-chain-convert pass that
looks for a negative index in the access chain and, if one is found,
does not do the conversion.
Fixes #4856
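A generic sketch of such a guard (illustrative only, not the pass's
actual code):
```cpp
#include <cstdint>
#include <vector>

// Generic sketch: access-chain indices are interpreted as signed, but
// OpCompositeExtract/OpCompositeInsert literals are unsigned, so any
// negative constant index must block the conversion.
inline bool CanConvertIndices(const std::vector<int64_t>& constant_indices) {
  for (int64_t index : constant_indices) {
    if (index < 0) return false;  // cannot be encoded as an unsigned literal
  }
  return true;
}
```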