A limit of 0 for the scalar replacement option used to indicate that
there is no limit. The current implementation does not allow 0. This
should be fixed.
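To make the intent concrete, here is a sketch of how a caller would ask for unlimited scalar replacement once 0 is accepted again (this assumes CreateScalarReplacementPass takes the limit as its parameter; the exact signature may differ):
  #include <cstdint>
  #include <vector>
  #include "spirv-tools/optimizer.hpp"

  void RunScalarReplacementWithNoLimit(const std::vector<uint32_t>& in,
                                       std::vector<uint32_t>* out) {
    spvtools::Optimizer opt(SPV_ENV_UNIVERSAL_1_3);
    // Hypothetical: a limit of 0 means "replace composites of any size".
    opt.RegisterPass(spvtools::CreateScalarReplacementPass(0));
    opt.Run(in.data(), in.size(), out);
  }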
It seems like the current implementation of KillNameAndDecorates does
not handle group decorations correctly. The id being removed is not
removed from the OpGroupDecorate instructions. Even worse, any
decorations that apply to that group are removed.
The solution is to use the function in the decoration manager that will
remove the decorations and update the instructions instead of doing the
work itself.
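A minimal sketch of that approach, assuming the decoration manager method is named RemoveDecorationsFrom and is reached through IRContext::get_decoration_mgr() (the names may not match the code exactly):
  #include <cstdint>
  #include "source/opt/ir_context.h"

  // Let the decoration manager remove every decoration of |id|, including the
  // entries in OpGroupDecorate instructions, and keep those instructions
  // consistent, instead of walking the annotation instructions by hand.
  void KillDecorationsFor(spvtools::opt::IRContext* context, uint32_t id) {
    context->get_decoration_mgr()->RemoveDecorationsFrom(id);
  }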
Adds unrolling to the legalization passes.
After enabling unrolling I found a bug when there is a self-referencing
phi node. That has been fixed.
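For illustration, the problematic shape is a phi whose back-edge value is its own result, e.g. (schematic SPIR-V kept in a C++ string; the ids are invented and this is not a complete module):
  const char* kSelfReferencingPhi = R"(
  %header = OpLabel
  %phi    = OpPhi %int %init %entry %phi %latch  ; back edge carries %phi itself
            OpLoopMerge %merge %latch None
            OpBranch %body
  )";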
The test that checks that the order of optimizations is correct also
needed to be updated.
We currently register decorations in the first pass through the
instructions. This is a problem because the validator has not even
checked if the decoration instructions are valid yet. This can lead to
unexpected behaviour from these side tables. For example, in
https://github.com/KhronosGroup/SPIRV-Tools/issues/1882, we use 5GB of
data to store 1 decoration for ids that are not even defined.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1882.
The current implementation of merge return can create bad, but correct,
code. When the return is not in a loop construct, it will insert a lot of
extra branches around code. The potentially large number of branches is
bad. At the same time, it can separate stores to variables from
their uses, hiding the fact that the store dominates the load.
This hurts the later analysis because the compiler thinks that multiple
values can reach a load, when there is really only 1. This poorer
analysis leads to missed optimizations.
The solution is to create a dummy loop around the entire body of the
function; then we can break from that loop with a single branch. Also,
the only new merge nodes will be those at the end of loops, meaning that
most analyses will not be hurt.
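Schematically, the transformed function body looks like this (illustrative SPIR-V in a C++ string; the labels are invented and the real pass output may differ in detail):
  const char* kMergeReturnShape = R"(
  %entry = OpLabel
           OpBranch %header
  %header = OpLabel
           OpLoopMerge %return_block %continue None
           OpBranch %body
  ; ... original body, with every OpReturn replaced by OpBranch %return_block ...
  %continue = OpLabel
           OpBranch %header
  %return_block = OpLabel
           OpReturn
  )";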
Remove dead code for cases that are no longer possible.
It seems like some drivers expect there to be an OpSelectionMerge
before conditional branches, even if they are not strictly needed.
So we add them.
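The emitted pattern is roughly the following (schematic SPIR-V in a C++ string, invented ids):
  const char* kExplicitSelectionMerge = R"(
           OpSelectionMerge %merge None            ; added even when optional
           OpBranchConditional %cond %then %merge
  %then  = OpLabel
           OpBranch %merge
  %merge = OpLabel
  )";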
* Create structured cfg analysis.
There are lots of optimizations that have to traverse the CFG in a
structured order just because they want to know which constructs a
basic block is contained in. This adds extra complexity to these
optimizations, or causes too much refactoring of older optimizations.
To help with this problem, I have written an analysis that can give this
information.
* Identify branches breaking from loops.
Dead branch elimination does a search for a conditional branch to the
end of the current selection construct. This search assumes that the
only way to leave the construct is through the merge node. But that is
not true. The code can jump to the merge node of a loop that contains
the construct.
The search needs to take this into consideration.
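For example (schematic SPIR-V in a C++ string, invented ids): a block in the 'then' side of a selection construct that is nested in a loop can branch straight to the loop's merge block, leaving the selection construct without going through its merge node and without an OpSelectionMerge of its own.
  const char* kBreakOutOfSelectionConstruct = R"(
  %then = OpLabel
          OpBranchConditional %exit_cond %loop_merge %stay_in_then
  )";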
When using lldb and/or gdb I frequently get odd std::string failures
when using the IR printing instructions we have now. This adds the
methods Instruction::Dump(), BasicBlock::Dump() and Function::Dump() to
emit the output of the pretty print to stderr.
With this I can now reliably print IR from gdb and lldb sessions.
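Minimal usage based on the description above; since the output goes to stderr, a debugger can invoke the methods directly (e.g. "call inst->Dump()" in gdb or lldb). The header paths are assumptions:
  #include "source/opt/basic_block.h"
  #include "source/opt/function.h"
  #include "source/opt/instruction.h"

  void DumpIr(spvtools::opt::Instruction* inst, spvtools::opt::BasicBlock* bb,
              spvtools::opt::Function* fn) {
    inst->Dump();  // one instruction, pretty printed to stderr
    bb->Dump();    // a whole basic block
    fn->Dump();    // a whole function
  }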
In block merging, we do not allow merging two blocks that both contain
merge instructions. This is because if the two blocks are merged, only 1 of
those instructions can exist. However, if the successor block is the
merge block of the predecessor, then we can delete the merge instruction
in the predecessor. In this case, we are able to merge the blocks.
* Create a new entry point for the optimizer
Creates a new struct to hold the options for the optimizer, and creates
an entry point that takes the optimizer options as a parameter.
The old entry point that takes validator options is now deprecated.
The validator options will be one of the optimizer options.
Part of the optimizer options will also be the upper bound on the id bound.
* Add a command line option to set the max value for the id bound. The default is 0x3FFFFF.
* Modify `TakeNextIdBound` to return 0 when the limit is reached.
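A sketch of how the new entry point might be used; the option function names below are inferred from this description and may not match the headers exactly:
  #include <cstdint>
  #include <vector>
  #include "spirv-tools/libspirv.h"
  #include "spirv-tools/optimizer.hpp"

  bool OptimizeWithIdBoundCap(const std::vector<uint32_t>& in,
                              std::vector<uint32_t>* out) {
    spv_optimizer_options options = spvOptimizerOptionsCreate();
    // Cap the module's id bound; TakeNextIdBound() returns 0 once it is hit.
    spvOptimizerOptionsSetMaxIdBound(options, 0x3FFFFF);

    spvtools::Optimizer opt(SPV_ENV_UNIVERSAL_1_3);
    opt.RegisterPerformancePasses();
    bool ok = opt.Run(in.data(), in.size(), out, options);

    spvOptimizerOptionsDestroy(options);
    return ok;
  }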
Support collapsed into one commit:
- Asm/Dis support for SPV_KHR_vulkan_memory_model
- Add Vulkan mem model image operands to switch
- Add TODO for source/validate_image.cpp
- val: Image operands NonPrivateTexelKHR, VolatileTexelKHR have no operands
This is required for memory model tests to pass SPIR-V validation.
- Round trip tests: Test new flags on OpCopyMemory*
This splits the spvtools_config into a public and private part to avoid
leaking internal bits to dependents. A new target is added for the
public headers so that "gn check" works for dependents.
Also formats test/fuzzers/BUILD.gn
This reverts commit 3cddf9131d.
This breaks the Chrome roll:
ERROR Unresolved dependencies.
//third_party/SPIRV-Tools/src/test/fuzzers:fuzzer_config(//build/toolchain/linux:clang_x64)
needs //third_party/SPIRV-Tools/src:spvtools_config(//build/toolchain/linux:clang_x64)
GN gen failed: 1
This splits the spvtools_config into a public and private part to avoid
leaking internal bits to dependents. A new target is added for the
public headers so that "gn check" works for dependents.
* Validate all type ids.
The validator does not check if the type of an instruction is actually
a type unless the OpCode has a specific requirement. For example,
OpFAdd is checked, but OpUndef is not.
This commit adds a generic check that if there is a type id, then the id
defines a type (see the fragment below).
http://crbug.com/876694
* Merge other checks for type into new one.
There are a couple of checks that the type id is a type for specific
opcodes. Those have been merged into 1.
Small changes to other test cases to make them valid enough for the
purpose of the test.
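The fragment referred to above: an instruction whose result-type operand names a constant instead of a type, which the generic check now rejects (schematic SPIR-V in a C++ string, invented ids):
  const char* kTypeIdDoesNotDefineAType = R"(
  %int = OpTypeInt 32 0
  %one = OpConstant %int 1
  %bad = OpUndef %one          ; %one is a constant, not a type -> rejected
  )";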
In the specification of `OpTypeFunction`, it says
> OpFunction is the only valid use of OpTypeFunction.
This commit adds a check in the validator for this rule.
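An example of a use that the new check rejects (schematic SPIR-V in a C++ string, invented ids):
  const char* kBadUseOfOpTypeFunction = R"(
  %void  = OpTypeVoid
  %fn_ty = OpTypeFunction %void
  %ptr   = OpTypePointer Function %fn_ty   ; not an OpFunction -> rejected
  )";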
A test started to fail because the new check happens before the check
the test case is testing. Updated the test case to still fail the
check it was supposed to fail originally.
http://crbug.com/874571
* Have the constant manager take ownership of constants.
Right now the owner of an object of type Constant that is in the
|const_pool_| of the constant manager is unclear. The constant
manager does not delete them, and there is no other reasonable owner. This
causes memory leaks.
This change fixes the memory leaks by having the constant manager
take ownership of the constants stored in |const_pool_|. Other
changes include interface changes to make it explicit that the constant
manager takes ownership of the object when a constant is registered
with the constant manager.
Fixes #1865.
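A generic sketch of the ownership model described above; this is not the real ConstantManager interface, and the placeholder Constant type and method name are invented:
  #include <memory>
  #include <utility>
  #include <vector>

  struct Constant {};  // stand-in for the real constant class

  class ConstantPool {
   public:
    // Takes ownership of |c|; callers keep only a non-owning pointer.
    const Constant* Register(std::unique_ptr<Constant> c) {
      pool_.push_back(std::move(c));
      return pool_.back().get();
    }

   private:
    std::vector<std::unique_ptr<Constant>> pool_;  // owns every constant
  };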
* Copy decorations when creating new ids.
When creating a new value based on an old value, we need to copy the
decorations to the new id. This change does this in 3 places (see the
sketch below):
1) The variable holding the return value of the function generated by
merge return should get decorations from the function.
2) The results of the OpPhi instructions should get decorations from the
variable they are replacing in the ssa writer.
3) In local access chain convert, the intermediate struct (result of
OpCompositeInsert) generated for the store replacement should get its
decorations from the variable being stored to.
Fixes #1787.
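The sketch mentioned above: a hypothetical helper for what each of these sites needs to do, assuming the decoration manager exposes CloneDecorations (the real call may differ):
  #include <cstdint>
  #include "source/opt/ir_context.h"

  // Copy every decoration applying to |old_id| over to the new |new_id|.
  void CopyDecorations(spvtools::opt::IRContext* ctx, uint32_t old_id,
                       uint32_t new_id) {
    ctx->get_decoration_mgr()->CloneDecorations(old_id, new_id);
  }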
It seems like at least 1 driver does not like a conditional jump to the end
of a selection construct. We are generating these in the merge return
pass. This change stops merge return from generating this sequence.
Part of #1861.
When predicating blocks, we need to traverse every block in
structured order to keep track of which construct a block is
contained in. The standard way of traversing code in structured order
is to create a list with all of the nodes in order. However, when
predicating blocks, new blocks are created, and those blocks are missed.
This causes branches that go too far.
The solution is to update the order as new blocks are created. Since
we are using an std::list, we do not have to worry about invalidation of
iterators when changing the list.
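The standard-library guarantee this relies on, shown on a toy list (plain C++, nothing SPIRV-Tools specific): std::list::insert never invalidates existing iterators.
  #include <iterator>
  #include <list>

  void InsertWhileWalking() {
    std::list<int> order = {1, 2, 4};
    for (auto it = order.begin(); it != order.end(); ++it) {
      if (*it == 2) {
        // Splice a "new block" in right after the current one; |it| and every
        // other iterator remain valid.
        order.insert(std::next(it), 3);
      }
    }
    // order is now {1, 2, 3, 4}.
  }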
* Split constant opcode validation out of idUsage and into
validate_constants.cpp
* minor style fixes
* reduced duplication
* fixed an issue with array sizing
* Refactor PredicateBlocks
Refactor PredicateBlocks so that we know which constructs a return
is contained in. Will be used later.
* Have PredicateBlocks jump the existing merge blocks.
In PredicateBlocks, we currently skip instructions with side effects,
but it still follows the same control flow (sort-of). This causes a
problem when we are trying to predicate code in a loop. We skip all
of the code with side effects (IV increment), but still follow the
same control flow (jump back to the start of the loop). This creates an
infinite loop because the code will keep jumping back to the start of
the loop without changing the values that affect the exit condition.
This is a large change to merge-return. When predicating a block that
is in a loop or merge construct, it will jump to the merge block of the
construct. Once out of all constructs we will generate code as we did
before.
* Handle breaks from structured-ifs in DCE.
Dead code elimination assumes that all conditional branches, except for
breaks and continues in loops, will have an OpSelectionMerge before them.
That is not true when breaking out of a selection construct.
The fix is to look for breaks in selection constructs in the same place
we look for breaks and continues for loops.