This commit checks the following rule when the Shader capability is present:
"The FPRoundingMode decoration can be applied only to a width-only
conversion instruction that is used as the Object operand of an
OpStore storing through a pointer to a 16-bit floating-point object
in the StorageBuffer, Uniform, PushConstant, Input, or Output
Storage Classes.".
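For illustration only (not part of the commit), the one pattern the rule permits looks like the excerpt below; the ids are hypothetical and the surrounding module is omitted.
```cpp
// Hypothetical excerpt, not a complete module: the only pattern the rule
// permits is a width-only conversion whose result feeds an OpStore through a
// pointer to a 16-bit float in one of the listed storage classes.
const char* kValidFPRoundingModeUse = R"(
            OpDecorate %conv FPRoundingMode RTE
; ...
    %conv = OpFConvert %half %float_val   ; width-only conversion
            OpStore %half_ptr %conv       ; %half_ptr points to a 16-bit float
                                          ; in the StorageBuffer storage class
)";
```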
The HlslCounterBufferGOOGLE decoration that was introduced changed
OpDecorateId so that it can now reference an id other than the target.
If that other id is used only in the decoration, then the definition of
the id will be removed because decorations do not count as real uses.
However, if the target of the decoration is still live, the decoration
will not be removed. This leaves a reference to an id that is not
defined.
There are two solutions to consider. The first is that if the decoration
is kept, then the definition of the id should be kept live. Implementing
this change would be involved because the way ADCE handles decorations
would have to be reimplemented.
The other solution is to remove the decoration when the id is otherwise
dead. This works for this specific case, and it is the more desirable
behaviour here. The id will always be the id of a variable that belongs
to a descriptor set. If that variable is not bound and we do not remove
it, the driver will complain.
I chose to implement the second solution. The first will be left for when
a case for it comes up.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1885.
This commit will change the message for unknown extensions from an error
to a warning.
Code was added to limit the number of warning messages so that consumers
of the messages are not overwhelmed. This is standard practice in
compilers.
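As a sketch of the idea only (the cap of 100 and the consumer below are illustrative, not the commit's code), a message consumer set through the public API can drop warnings past a fixed limit:
```cpp
#include <cstdint>
#include <iostream>

#include "spirv-tools/libspirv.hpp"

int main() {
  spvtools::SpirvTools tools(SPV_ENV_VULKAN_1_1);

  // Forward at most kMaxWarnings warnings so consumers of the messages are
  // not overwhelmed; the cap of 100 is an arbitrary choice for this sketch.
  constexpr uint32_t kMaxWarnings = 100;
  uint32_t warning_count = 0;
  tools.SetMessageConsumer([&](spv_message_level_t level,
                               const char* /*source*/,
                               const spv_position_t& /*position*/,
                               const char* message) {
    if (level == SPV_MSG_WARNING && ++warning_count > kMaxWarnings) {
      return;  // silently drop warnings beyond the cap
    }
    std::cerr << message << "\n";
  });

  // Validating a module that uses an unknown extension would now report a
  // warning through this consumer instead of an error.
  return 0;
}
```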
Many other issues were found while looking into this. They have been
documented in #1950.
Fixes http://crbug.com/875547.
* Check rules from the Execution Mode tables, section 2.16.2, and the
Vulkan environment spec
* Allow the MeshNV execution model with the following execution modes
  * LocalSize, LocalSizeId, OutputPoints and OutputVertices
  * Done to not break their validation
There are a few spots where copy propagate arrays is trying
to go from a Type to an id, but the type is not unique. When generating
code, this pass needs specific ids, otherwise we get type mismatches.
However, the ambiguous types mean we can sometimes get the wrong type
and generate invalid code.
That code has been rewritten to not rely on the type manager, and just
look at the instructions instead.
I have opened https://github.com/KhronosGroup/SPIRV-Tools/issues/1939 to
try to get a way to make this more robust.
In DecorationManager::RemoveDecorationsFrom, we do not remove the id
from a decoration group if the group has no decorations. This causes
problems because KillNamesAndDecorates is supposed to remove all
references to the id, but in this case, there is still a reference.
This is fixed by adding a special case.
Also, there is the possibility of a double free because
RemoveDecorationsFrom will delete the instructions defining |id| when
|id| is a decoration group. KillInst would later write to memory
that has been deleted when trying to turn the instruction into a Nop. To
fix this, we will only remove the decorations that use |id|, and not its
definition, in RemoveDecorationsFrom.
OpPhi instructions must appear before all non-OpPhi instructions
except for OpLine. Without this commit, the validator does not check
the case where an OpPhi is preceded by an OpLine and the OpLine is
preceded by a non-OpPhi instruction that is not OpLine.
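For illustration only (ids are hypothetical, module omitted), the previously unchecked ordering looks like this:
```cpp
// Hypothetical excerpt, not a complete module: the OpPhi is separated from
// the start of the block by an OpLine, and the OpLine is preceded by a
// non-OpPhi instruction, which the validator previously accepted.
const char* kPreviouslyAcceptedOrdering = R"(
  %block = OpLabel
    %tmp = OpUndef %int                     ; non-OpPhi, non-OpLine
            OpLine %file 10 0
    %phi = OpPhi %int %a %pred1 %b %pred2   ; invalid: OpPhi after a non-OpPhi
)";
```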
Checked all instructions that take an OpTypeSampledImage or
OpTypeImage object, as suggested in #487. The OpImageTexelPointer
instruction was missing and the others look good. This commit adds only
OpImageTexelPointer.
This CL removes the use of SetContextMessageConsumer from the
binary_parse_test tests; they now create a Context object and use
SetMessageConsumer instead.
Instead of using the source/table.h methods, this CL switches the stats
tool to use the spvtools::Context class and assigns the message consumer
through the public API.
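The pattern being switched to looks roughly like the sketch below (the consumer body is illustrative):
```cpp
#include <iostream>

#include "spirv-tools/libspirv.hpp"

int main() {
  // A Context object owns the message consumer; it is set through the public
  // API instead of the internal SetContextMessageConsumer helper.
  spvtools::Context ctx(SPV_ENV_UNIVERSAL_1_3);
  ctx.SetMessageConsumer([](spv_message_level_t, const char* /*source*/,
                            const spv_position_t&, const char* message) {
    std::cerr << message << "\n";
  });
  // ctx.CContext() can then be passed to the C interfaces that need it.
  return 0;
}
```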
A limit of 0 for the scalar replacement option is used to indicate that
there is no limit. The current implementation does not allow 0. This
should be fixed.
It seems like the current implementation of KillNameAndDecorates does
not handle group decorations correctly. The id being removed is not
removed from the OpGroupDecorate instructions. Even worse, any
decorations that apply to that group are removed.
The solution is to use the function in the decoration manager that will
remove the decorations and update the instructions instead of doing the
work itself.
Adds unrolling to the legalization passes.
After enabling unrolling I found a bug when there is a self-referencing
phi node. That has been fixed.
The test that checks that the order of optimizations is correct also
needed to be updated.
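Running the legalization recipe, which now includes unrolling, through the public Optimizer API looks like this sketch (the helper name is made up):
```cpp
#include <cstdint>
#include <vector>

#include "spirv-tools/optimizer.hpp"

// Runs the legalization recipe, which now includes loop unrolling, on
// |binary| in place. Returns true on success.
bool Legalize(std::vector<uint32_t>* binary) {
  spvtools::Optimizer opt(SPV_ENV_UNIVERSAL_1_3);
  opt.RegisterLegalizationPasses();
  return opt.Run(binary->data(), binary->size(), binary);
}
```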
The current implementation of merge return can create bad, but correct,
code. When it is not in a loop construct, it will insert a lot of
extra branches around code. The potentially large number of branches is
bad. At the same time, it can separate stores to variables from their
uses, hiding the fact that the store dominates the load.
This hurts the later analyses because the compiler thinks that multiple
values can reach a load, when there is really only one. This poorer
analysis leads to missed optimizations.
The solution is to create a dummy loop around the entire body of the
function; then we can break from that loop with a single branch. Also,
the only new merge nodes would be those at the end of loops, meaning that
most analyses will not be hurt.
Remove dead code for cases that are no longer possible.
It seems like some drivers expect there to be an OpSelectionMerge
before conditional branches, even if they are not strictly needed.
So we add them.
* Create structured CFG analysis.
There are lots of optimizations that have to traverse the CFG in a
structured order just because they want to know which constructs a
basic block is contained in. This adds extra complexity to these
optimizations, or causes too much refactoring of older optimizations.
To help with this problem, I have written an analysis that can give this
information.
* Identify branches breaking from loops.
Dead branch elimination does a search for a conditional branch to the
end of the current selection construct. This search assumes that the
only way to leave the construct is through the merge node. But that is
not true. The code can jump to the merge node of a loop that contains
the construct.
The search needs to take this into consideration.
In block merging, we do not allow the merging of two blocks that both
have merge instructions. This is because, if the two blocks are merged,
only one of those instructions can exist. However, if the successor
block is the merge block of the predecessor, then we can delete the
merge instruction in the predecessor. In this case, we are able to merge
the blocks.
* Create a new entry point for the optimizer
Creates a new struct to hold the options for the optimizer, and creates
an entry point that takes the optimizer options as a parameter.
The old entry point that takes validator options is now deprecated.
The validator options will be one of the optimizer options.
Part of the optimizer options will also be the upper bound on the id bound.
* Add a command line option to set the max value for the id bound. The default is 0x3FFFFF.
* Modify `TakeNextIdBound` to return 0 when the limit is reached.
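A sketch of what using the new entry point looks like; the setter names below are my recollection of the C++ wrappers over the options API and may differ in detail, and the helper name is made up:
```cpp
#include <cstdint>
#include <vector>

#include "spirv-tools/optimizer.hpp"

// Sketch of the new entry point: the optimizer options carry the validator
// options and the upper bound on the id bound.
bool RunWithIdBoundLimit(std::vector<uint32_t>* binary) {
  spvtools::Optimizer opt(SPV_ENV_UNIVERSAL_1_3);
  opt.RegisterPerformancePasses();

  spvtools::ValidatorOptions val_opts;
  spvtools::OptimizerOptions opt_opts;
  opt_opts.set_run_validator(true);
  opt_opts.set_validator_options(val_opts);
  opt_opts.set_max_id_bound(0x3FFFFF);  // the default mentioned above

  return opt.Run(binary->data(), binary->size(), binary, opt_opts);
}
```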
Support collapsed into one commit:
- Asm/Dis support for SPV_KHR_vulkan_memory_model
- Add Vulkan mem model image operands to switch
- Add TODO for source/validate_image.cpp
- val: Image operands NonPrivateTexelKHR, VolatileTexelKHR have no operands
This is required for memory model tests to pass SPIR-V validation.
- Round trip tests: Test new flags on OpCopyMemory*
This splits the spvtools_config into a public and private part to avoid
leaking internal bits to dependents. A new target is added for the
public headers so that "gn check" works for dependents.
Also formats test/fuzzers/BUILD.gn
* Validate all type ids.
The validator does not check whether the type of an instruction is
actually a type unless the opcode has a specific requirement. For
example, OpFAdd is checked, but OpUndef is not.
This commit adds a generic check that, if there is a type id, then the
id defines a type.
http://crbug.com/876694
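For illustration only (not a complete module), this is the kind of module the generic check now rejects:
```cpp
// Hypothetical excerpt: the "type" operand of OpUndef is a constant, not a
// type, which only the new generic check catches.
const char* kBadTypeId = R"(
   %int = OpTypeInt 32 0
  %five = OpConstant %int 5
   %bad = OpUndef %five   ; invalid: %five does not define a type
)";
```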
* Merge other checks for types into the new one.
There are a couple of checks that the type id is a type for specific
opcodes. Those have been merged into one.
Small changes to other test cases to make them valid enough for the
purpose of the test.
In the specification of `OpTypeFunction`, it says
> OpFunction is the only valid use of OpTypeFunction.
This commit adds a check in the validator for this rule.
A test started to fail because the new check happens before the check
the test case is testing. Updated the test case so that it still fails
the check it was supposed to fail originally.
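For illustration only (not a complete module), a use the new check rejects next to the one use it allows:
```cpp
// Hypothetical excerpt: OpTypeFunction may only be used by OpFunction.
const char* kTypeFunctionUses = R"(
     %void = OpTypeVoid
    %fn_ty = OpTypeFunction %void
  %bad_ptr = OpTypePointer Function %fn_ty   ; invalid: not an OpFunction use
     %main = OpFunction %void None %fn_ty    ; valid: the only allowed use
)";
```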
http://crbug.com/874571
* Copy decorations when creating new ids.
When creating a new value based on an old value, we need to copy the
decorations to the new id. This change does this in 3 places:
1) The variable holding the return value of the function generated by
merge return should get decorations from the function.
2) The results of the OpPhi instructions should get decorations from the
variable they are replacing in the ssa writer.
3) In local access chain convert, the intermediate struct (result of
OpCompositeInsert) generated for the store replacement should get its
decorations from the variable being stored to.
Fixes #1787.
It seems like at least one driver does not like a conditional jump to the
end of a selection construct. We are generating these in the merge return
pass. This change stops merge return from generating this sequence.
Part of #1861.
When predicating blocks, we need to traverse every block in
structured order in order to keep track of which construct a block is
contained in. The standard way of traversing code in structured order
is to create a list with all of the nodes in order. However, when
predicating blocks, new blocks are created, and those blocks are missed.
This causes branches that go too far.
The solution is to update the order as new blocks are created. Since
we are using an std::list, we do not have to worry about invalidation of
iterators when changing the list.
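A minimal standalone illustration of the property being relied on (nothing here is SPIRV-Tools code):
```cpp
#include <cassert>
#include <iterator>
#include <list>

int main() {
  // Inserting into a std::list leaves existing iterators valid, so the
  // structured-order list can be extended while it is being walked.
  std::list<int> order = {1, 2, 4};
  auto it = std::next(order.begin());  // points at 2

  order.insert(std::next(it), 3);      // add a "new block" right after 2

  assert(*it == 2);                    // |it| is still valid
  assert(order.size() == 4);           // list is now 1, 2, 3, 4
  return 0;
}
```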
* Split constant opcode validation out of idUsage and into
validate_constants.cpp
* minor style fixes
* reduced duplication
* fixed an issue with array sizing
* Refactor PredicateBlocks
Refactor PredicateBlocks so that we know which constructs a return
is contained in. Will be used later.
* Have PredicateBlocks jump to the existing merge blocks.
In PredicateBlocks, we currently skip instructions with side effects,
but the code still follows the same control flow (sort of). This causes
a problem when we are trying to predicate code in a loop. We skip all
of the code with side effects (the induction variable increment), but
still follow the same control flow (jump back to the start of the loop).
This creates an infinite loop because the code will keep jumping back to
the start of the loop without changing the values that affect the exit
condition.
This is a large change to merge-return. When predicating a block that
is in a loop or merge construct, it will jump to the merge block of the
construct. Once out of all constructs we will generate code as we did
before.
* Handle breaks from structured-ifs in DCE.
Dead code elimination assumes that all conditional branches, except for
breaks and continues in loops, will have an OpSelectionMerge before them.
That is not true when breaking out of a selection construct.
The fix is to look for breaks in selection constructs in the same place
we look for breaks and continues for loops.
When dead-branch-elim folds a conditional branch, it also deletes the
OpSelectionMerge instruction. If that construct contains a
conditional branch to the merge node, it will not have its own
OpSelectionMerge. When the header's merge instruction is deleted, the
inner conditional branch will no longer be legal. It will be a
selection to a node that is not a merge node.
We fix this up by moving the OpSelectionMerge to a new location if it is
still needed.