The Vulkan specification does not permit use of the VertexId and
InstanceId BuiltIn decorations, so add a check to ensure they are not
being used when the target environment is Vulkan.
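As a rough sketch of how a client opts into these environment-specific checks (public C++ API from spirv-tools/libspirv.hpp; the helper name is mine):

```
#include <cstdint>
#include <cstdio>
#include <vector>
#include "spirv-tools/libspirv.hpp"

// Validate a binary against a Vulkan target environment so that
// environment-specific rules, such as the VertexId/InstanceId ban,
// are applied.
bool ValidateForVulkan(const std::vector<uint32_t>& binary) {
  spvtools::SpirvTools tools(SPV_ENV_VULKAN_1_1);
  tools.SetMessageConsumer([](spv_message_level_t, const char*,
                              const spv_position_t&, const char* message) {
    std::fprintf(stderr, "%s\n", message);  // report validation errors
  });
  return tools.Validate(binary);
}
```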
* Add base and core bindless validation instrumentation classes
* Fix formatting.
* Few more formatting fixes
* Fix build failure
* More build fixes
* Need to call non-const functions in order.
Specifically, these are functions which call TakeNextId(). These need to
be called in a specific order to guarantee that tests which do exact
compares will work across all platforms. C++ generally does not
guarantee the order of evaluation of operands, so any such functions need
to be called in separate statements to guarantee the order.
* More ordering.
* And more ordering.
* And more formatting.
* Attempt to fix NDK build
* Another attempt to address NDK build problem.
* One more attempt at NDK build failure
* Add instrument.hpp to BUILD.gn
* Some name improvement in instrument.hpp
* Change all types in instrument.hpp to int.
* Improve documentation in instrument.hpp
* Format fixes
* Comment clean up in instrument.hpp
* imageInst -> image_inst
* Fix GetLabel() issue.
If there is only one return and it is in a loop, then the function cannot be inlined.
Fix the condition for when inlined code needs a one-trip loop wrapper. The dummy loop is needed when there is a return inside a selection construct, even if there is only one return.
* Validate the id bound.
Validates that the id bound for the module is not larger than the max id
bound. Also adds an option to set the max id bound. Allows the
optimizer option to set the max id bound to also set the id bound for
the validation run done by the optimizer.
Fixes #2030.
This CL takes the various opt unit tests and makes a single executable
instead of one per test. This reduces the number of build targets by
~125 when building with ninja.
When looking for a break from a selection construct, we do not realize
that a jump to the continue target of a loop containing the selection
is a break. This causes an infinite loop, or possibly other failures.
Fixes #2004.
When looking for a break from a selection construct, we do not need to
look inside nested constructs. However, if a loop header has an
unconditional branch, then we enter the loop. Entering the loop causes
an infinite loop because we keep going through the loop.
The solution is to look for a merge block, if one exists, even for blocks
terminated by an OpBranch.
Fixes #1979.
The SPV_KHR_8bit_storage extension does not permit 8-bit integers to be
cast directly to floating point types. We are seeing shaders in the
wild, being produced by toolchains like glslang, that are generating
invalid SPIR-V.
This change adds validation to check for the patterns not permitted, and
some tests that expose the failure.
ADCE liveness algorithm should treat OpUnreachable at least like other
branch instructions. It was being treated as always live which was
preventing useless structured constructs from being eliminated.
OpUnreachable is generated by dead branch elimination which is now
being required by merge return, so this fix should accompany that
change.
We currently run merge-return on all functions, but
dead-branch-elimination only runs on functions reachable from an entry
point or exported function. Since dead-branch-elimination is needed for
merge-return, they have to match.
Fixes #1976.
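For a hand-built pipeline, the ordering constraint looks roughly like the sketch below (pass names from the public optimizer header; the helper name is mine):

```
#include "spirv-tools/optimizer.hpp"

// Register dead branch elimination ahead of merge return, mirroring the
// precondition described above; RegisterPass returns the optimizer, so
// the calls chain in the required order.
void RegisterMergeReturnPipeline(spvtools::Optimizer* opt) {
  opt->RegisterPass(spvtools::CreateDeadBranchElimPass())
      .RegisterPass(spvtools::CreateMergeReturnPass());
}
```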
In logical addressing mode, we are not allowed to generate variable
pointers. There is already a check for OpSelect. However, OpPhi
and OpPtrAccessChain are not checked to make sure they do not
generate a variable pointer. I've added those checks.
Fixes #1957.
The pass was removing control structures that did not have a data
dependency with the enclosed live loop and otherwise did not contain
live code. An example is a counting loop around a live loop.
Fixes #1967.
* MakePointerVisibleKHR cannot be used with OpStore
* MakePointerAvailableKHR cannot be used with OpLoad
* MakePointerAvailableKHR and MakePointerVisibleKHR both require
NonPrivatePointerKHR
* NonPrivatePointerKHR is limited to a subset of storage classes
* many tests
* Validation checks for new image operands MakeTexelAvailableKHR and
MakeTexelVisibleKHR
* added tests
* Tests that NonPrivateTexelKHR is accepted for all image operands
Updating test environments
* fixed build errors
* changed image types for *FetchSuccess tests to use a type defined in
1.3 shader body
Merge return assumes that the only unreachable blocks are those needed
to keep the structured cfg valid. Even those must be essentially empty
blocks.
If this is not the case, we get unpredictable behaviour. This commit
adds a check in merge return, and emits an error if it is not the case.
Added a pass of dead branch elimination before merge return in both the
performance and size passes. It is a precondition of merge return.
Fixes #1962.
The current implementation in the folder when seeing a division by zero
is to assert. In the release build, the compiler will attempt to
compute the value, which causes its own problems.
The solution I will go with is to fold the division, and just give it
the value of 0. The same goes for remainder and mod operations.
Fixes #1961.
This commit checks the following when Shader capability exists:
"The FPRoundingMode decoration can be applied only to a width-only
conversion instruction that is used as the Object operand of an
OpStore storing through a pointer to a 16-bit floating-point object
in the StorageBuffer, Uniform, PushConstant, Input, or Output
Storage Classes.".
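As an illustration (an assembly fragment with made-up ids, not a complete module), the only placement the rule allows looks like this:

```
// Illustrative SPIR-V fragment embedded as a string; %conv is a
// width-only conversion decorated with FPRoundingMode, and its result is
// the Object operand of an OpStore through a pointer to a 16-bit float.
const char* kAllowedFPRoundingModeUse = R"(
               OpDecorate %conv FPRoundingMode RTE
       %conv = OpFConvert %half %float_value
               OpStore %output_f16_ptr %conv
)";
```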
The newly introduced HlslCounterBufferGOOGLE decoration changed OpDecorateId
so that it can now reference an id other than the target. If that other
id is used only in the decoration, then the definition of the id will be
removed because decorations do not count as real uses.
However, if the target of the decoration is still live the decoration
will not be removed. This leaves a reference to an id that is not
defined.
There are two solutions to consider. The first is that if the decoration
is kept, then the definition of the id should be kept live. Implementing
this change would be involved because the way ADCE handles decorations
would have to be reimplemented.
The other solution is to remove the decoration if the id is otherwise dead.
This works for this specific case. Also this is the more desirable
behaviour in this case. The id will always be the id of a variable that
belongs to a descriptor set. If that variable is not bound and we do
not remove it, the driver will complain.
I chose to implement the second solution. The first will be left until
a case for it comes up.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1885.
This commit will change the message for unknown extensions from an error
to a warning.
Code was added to limit the number of warning messages so that consumers
of the messages are not overwhelmed. This is standard practice in
compilers.
Many other issues were found while looking into this. They have been
documented in #1950.
Fixes http://crbug.com/875547.
* Check rules from Execution Mode tables, 2.16.2 and the Vulkan
environment spec
* Allows MeshNV execution model with the following execution modes
* LocalSize, LocalSizeId, OutputPoints and OutputVertices
* Done to not break their validation
There are a few spots where copy propagate arrays is trying
to go from a Type to an id, but the type is not unique. When generating
code this pass needs specific ids, otherwise we get type mismatches.
However, the ambiguous types mean we can sometimes get the wrong type
and generate invalid code.
That code has been rewritten to not rely on the type manager, and just
look at the instructions instead.
I have opened https://github.com/KhronosGroup/SPIRV-Tools/issues/1939 to
try to get a way to make this more robust.
In DecorationManager::RemoveDecorationsFrom, we do not remove the id
from a decoration group if the group has no decorations. This causes
problems because KillNamesAndDecorates is supposed to remove all
references to the id, but in this case, there is still a reference.
This is fixed by adding a special case.
Also, there is the possibility of a double free because
RemoveDecorationsFrom will delete the instruction defining |id| when
|id| is a decoration group. KillInst would later write to memory
that has been deleted when trying to turn it into a Nop. To fix this,
we will only remove the decorations that use |id|, and not its
definition, in RemoveDecorationsFrom.
OpPhi instruction must appear before all non-OpPhi instructions
except for OpLine. Without this commit, the validator does not check
the case where an OpPhi is preceded by an OpLine and the OpLine is
preceded by a non-OpPhi instruction that is not OpLine.
Checked all instructions whose object is OpTypeSampledImage or
OpTypeImage, as suggested in #487. The OpImageTexelPointer instruction
is missing and the others look good. This commit adds only
OpImageTexelPointer.
This CL removes the use of SetContextMessageConsumer from the
binary_parse_test tests and creates a Context object and uses
SetMessageConsumer instead.
Instead of using the source/table.h methods, this CL switches the stats
tool to use the spvtools::Context class and assign the message consumer
through the public API.
A limit of 0 for the scalar replacement option is used to indicate that
there is no limit. The current implementation did not allow 0. This
is now fixed.
It seems like the current implementation of KillNameAndDecorates does
not handle group decorations correctly. The id being removed is not
removed from the OpGroupDecorate instructions. Even worse, any
decorations that apply to that group are removed.
The solution is to use the function in the decoration manager that will
remove the decorations and update the instructions instead of doing the
work itself.
Adds unrolling to the legalization passes.
After enabling unrolling I found a bug when there is a self-referencing
phi node. That has been fixed.
The test that checks that the order of optimizations is correct also
needed to be updated.
The current implementation of merge return can create bad, but correct,
code. When it is not in a loop construct, it will insert a lot of
extra branches around code. The potentially large number of branches is
bad. At the same time, it can separate stores to variables from
their uses, hiding the fact that the store dominates the load.
This hurts later analysis because the compiler thinks that multiple
values can reach a load when there is really only one. This poorer
analysis leads to missed optimizations.
The solution is to create a dummy loop around the entire body of the
function; then we can break from that loop with a single branch. Also,
the only new merge nodes will be those at the end of loops, meaning that
most analyses will not be hurt.
Remove dead code for cases that are no longer possible.
It seems like some drivers expect there to be an OpSelectionMerge
before conditional branches, even if they are not strictly needed.
So we add them.
* Create structured CFG analysis.
There are lots of optimizations that have to traverse the CFG in a
structured order just because they want to know which constructs a
basic block is contained in. This adds extra complexity to these
optimizations, or causes too much refactoring of older optimizations.
To help with this problem, I have written an analysis that can give this
information.
* Identify branches breaking from loops.
Dead branch elimination does a search for a conditional branch to the
end of the current selection construct. This search assumes that the
only way to leave the construct is through the merge node. But that is
not true. The code can jump to the merge node of a loop that contains
the construct.
The search needs to take this into consideration.
In block merging, we do not allow the merging of two blocks with merge
instructions. This is because if the two blocks are merged, only one of
those instructions can exist. However, if the successor block is the
merge block of the predecessor, then we can delete the merge instruction
in the predecessor. In this case, we are able to merge the blocks.
* Create a new entry point for the optimizer
Creates a new struct to hold the options for the optimizer, and creates
an entry point that take the optimizer options as a parameter.
The old entry point that takes validator options is now deprecated.
The validator options will be one of the optimizer options.
Part of the optimizer options will also be the upper bound on the id bound.
* Add a command line option to set the max value for the id bound. The default is 0x3FFFFF.
* Modify `TakeNextIdBound` to return 0 when the limit is reached.
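A sketch of what the new entry point might look like from a client (option names match my reading of the public header; treat the details as an assumption):

```
#include <cstdint>
#include <vector>
#include "spirv-tools/optimizer.hpp"

// Run the optimizer through the options-based entry point: validate the
// input first and raise the ceiling on the module's id bound.
bool OptimizeWithOptions(const std::vector<uint32_t>& in,
                         std::vector<uint32_t>* out) {
  spvtools::Optimizer opt(SPV_ENV_VULKAN_1_1);
  opt.RegisterPerformancePasses();

  spvtools::OptimizerOptions options;
  options.set_run_validator(true);     // validator options live inside too
  options.set_max_id_bound(0x3FFFFF);  // the default mentioned above

  return opt.Run(in.data(), in.size(), out, options);
}
```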
Support collapsed into one commit:
- Asm/Dis support for SPV_KHR_vulkan_memory_model
- Add Vulkan mem model image operands to switch
- Add TODO for source/validate_image.cpp
- val: Image operands NonPrivateTexelKHR, VolatileTexelKHR have no operands
This is required for memory model tests to pass SPIR-V validation.
- Round trip tests: Test new flags on OpCopyMemory*
This splits the spvtools_config into a public and private part to avoid
leaking internal bits to dependents. A new target is added for the
public headers so that "gn check" works for dependents.
Also formats test/fuzzers/BUILD.gn
* Validate all type ids.
The validator does not check if the type of an instruction is actually
a type unless the OpCode has a specific requirement. For example,
OpFAdd is checked, but OpUndef is not.
The commit adds a generic check that if there is a type id then the id
defines a type.
http://crbug.com/876694
* Merge other checks for type into new one.
There are a couple of checks that the type id is a type for specific
opcodes. Those have been merged into one.
Small changes to other test cases to make them valid enough for the
purpose of the test.
In the specification of `OpTypeFunction`, it says
> OpFunction is the only valid use of OpTypeFunction.
This commit adds a check in the validator for this rule.
A test started to fail because the new check happens before the check
the test case is testing. Updated the test case to still fail the
check it was supposed to fail originally.
http://crbug.com/874571
* Copy decorations when creating new ids.
When creating a new value based on an old value, we need to copy the
decorations to the new id. This change does this in 3 places:
1) The variable holding the return value of the function generated by
merge return should get decorations from the function.
2) The results of the OpPhi instructions should get decorations from the
variable they are replacing in the ssa writer.
3) In local access chain convert, the intermediate struct (result of
OpCompositeInsert) generated for the store replacement should get its
decorations from the variable being stored to.
Fixes #1787.
It seems like at least one driver does not like a conditional jump to the end
of a selection construct. We are generating these in the merge return
pass. This change stops merge return from generating this sequence.
Part of #1861.
When doing predicate blocks, we need to traverse every block in
structured order in order to keep track of which construct a block is
contained in. The standard way of traversing code in structured order
is to create a list with all of the nodes in order. However, when
predicating blocks, new blocks are created, and those blocks are missed.
This causes branches that go too far.
The solution is to update the order as new blocks are created. Since
we are using an std::list, we do not have to worry about invalidation of
iterators when changing the list.
* Split constant opcode validation out of idUsage and into
validate_constants.cpp
* minor style fixes
* reduced duplication
* fixed an issue with array sizing
* Refactor PredicateBlocks
Refactor PredicateBlocks so that we know which constructs a return
is contained in. Will be used later.
* Have PredicateBlocks jump to the existing merge blocks.
In PredicateBlocks, we currently skip instructions with side effects,
but it still follows the same control flow (sort-of). This causes a
problem, when we are trying to predicate code in a loop. We skip all
of the code with side effects (IV increment), but still follow the
same control flow (jumping back to the start of the loop). This creates an
infinite loop because the code will keep jumping back to the start of
the loop without changing the values that affect the exit condition.
This is a large change to merge-return. When predicating a block that
is in a loop or merge construct, it will jump to the merge block of the
construct. Once out of all constructs we will generate code as we did
before.
* Handle breaks from structured-ifs in DCE.
Dead code elimination assumes that all conditional branches, except for
breaks and continues in loops, will have an OpSelectionMerge before them.
That is not true when breaking out of a selection construct.
The fix is to look for breaks in selection constructs in the same place
we look for breaks and continues for loops.
When dead-branch-elim folds a conditional branch, it also deletes the
OpSelectionMerge instruction. If that construct contains a
conditional branch to the merge node, it will not have its own
OpSelectionMerge. When the header's merge instruction is deleted, the
inner conditional branch will no longer be legal. It will be a
selection to a node that is not a merge node.
We fix this up by moving the OpSelectionMerge to a new location if it is
still needed.
This forks the testing harness from https://github.com/google/shaderc
to allow testing CLI tools.
New features needed for SPIRV-Tools include:
1- A new PlaceHolder subclass for spirv shaders. This place holder
calls spirv-as to convert assembly input into SPIRV bytecode. This is
required for most tools in SPIRV-Tools.
2- A minimal testing file for testing basic functionality of spirv-opt.
Add tests for all flags in spirv-opt.
1. Adds tests to check that known flags match the names that each pass
advertises.
2. Adds tests to check that -O, -Os and --legalize-hlsl schedule the
expected passes.
3. Adds more functionality to Expect classes to support regular
expression matching on stderr.
4. Add checks for integer arguments to optimization flags.
5. Fixes#1817 by modifying the parsing of integer arguments in
flags that take them.
6. Fixes -Oconfig file parsing (#1778). It reads every line of the file
into a string and then parses that string by tokenizing every group of
characters between whitespaces (using the standard cin reading
operator). This mimics shell command-line parsing, but it does not
support quoting (and I'm not planning to).
When doing the validator checks, an instruction is currently registered
at the end of IdPass. This creates an inconsistency. In IdPass, an
instruction that uses its own result will treat that use as a forward
reference. Then in the following passes it will not because the
definition can be found.
It seems best to update the state after all of the checks have been done
for the current instruction. This makes it consistent for all of the
passes.
This makes a difference when trying to verify OpTypeStruct.
Fixes https://crbug.com/874372.
In local-access-chain-convert, we replace loads by loading the entire
variable, then doing the extract. The extract will have the same value
as the load. However, if the load has a decoration on it, the
decoration is lost because we do not copy any of them to the new id.
This is fixed by rewriting the load into the extract and keeping the
same result id.
This change has the effect that we do not call DCEInst on the loads
because the load is not being deleted, but replaced. This could leave
OpAccessChain instructions around that are not used. This is not a
problem for -O and -Os. They run local_single_*_elim passes and then
dead code elimination. The dce will remove the unused access chains,
and the load elimination passes work even if there are unused access
chains. I have added tests to them to ensure they will not lose
opportunities.
Fixes #1787.
The code in source/message was only used in a single set of tests to
format the output results. This CL changes the test to verify the
message instead of all the error values and removes the source/message
code.
* Moved function opcode validation out of idUsage and into new files
* minor style changes
* General opcode checking is in validate_function.cpp
* Execution limitation checking is in
validate_execution_limitations.cpp
* Execution limitations was split into a new pass as it requires other
validation to register those limitations first.
* Changed entry point validation to check storage class of variable
instead of pointer
* added a test
* Moved several checks after opcode validation
* These checks should be able to guarantee individual instructions are
ok
* Updated tests due to reordered checks
* Moved type instruction validation out of validation idUsage into a new
file
* Consolidate type unique pass into new file
* Removed one bad test
* Reworked validation ordering
* Run the validator in the optimization fuzzers.
The optimizer assumes that the input to the optimizer is valid. Since
the fuzzers do not check that the input is valid before passing the
spir-v to the optimizer, we are getting a few errors.
The solution is to run the validator in the optimizer to validate the
input.
For the legalization passes, we need to add an extra option to the
validator to accept certain types of variable pointers, even if the
capability is not given. At the same time, we changed the option
"--legalize-hlsl" to relax the validator in the same way instead of
turning it off.
Fixes #1800
* Refactored duplication of code between OpCopyMemory and
OpCopyMemorySized validation
* Fixed some bugs in OpCopyMemorySized validation
* Replaced asserts with checks
* Added new tests
* Replaced uses in opcode validation of current_function()
* Added non-const accessor to function lookup in ValidationState_t
* Updated a couple bad tests due to check reordering
1.
BUILD.gn: Don't use the extra Chromium clang warnings
Also removes the unused .gn secondary_sources.
2.
Move fuzzers in test/ instead of testing/
This frees up testing/ to be the git subtree of Chromium's src/testing/
that contains test.gni, gtest, gmock and libfuzzer
3.
DEPS: get the whole testing/ subtree of Chromium
4.
BUILD.gn: Simplify the standalone gtest targets
These target definitions are inspired by ANGLE's and add a variable
that is the path of the googletest directory so that it can be made
overridable in future commits.
6.
BUILD.gn: Add overridable variables for deps dirs
This avoids hardcoded paths to dependencies that make it hard to
integrate SPIRV-Tools in other GN projects.
When validating a FunctionCall we can trigger an assert if we are not
currently within a function body. This CL adds verification that we are
within a function before attempting to add a function call.
Issue 1789.
In the merge return pass, we will split a block, but not update the phi
instructions that reference the block. Since the branch in the original
block is now part of the block with the new id, the phi nodes must be
updated.
This commit will change this.
I have also considered other places where an id of a basic block could
be referenced, and I don't think any of them need to change.
1) Branch and merge instructions: These jump to the start of the
original block, and so we want them to jump to the block that uses the
original id. Nothing needs to change.
2) Names and decorations: I don't think it matters which block keeps the
name, and there are no decorations that apply to basic blocks.
Fixes #1736.
Many of the files have using std::<foo> statements in them, but then the
use of <foo> will be inconsistently std::<foo> or <foo> scattered
through the file. This CL removes all of the using statements and
updates the code to have the required std:: prefix.
This CL removes the two diag() overloads and leaves only the version
which accepts an Instruction. This is safer as we never use the
implicit location from the validation state.
When creating a new phi for a value in the function, merge return will
rewrite all uses of an id that are no longer dominated by its
definition. Uses that are not in a basic block, like OpName or
decorations, are not dominated, but they should not be replaced.
Fixes #1736.
This CL updates the diag() calls in validate_cfg to provide the
associated instruction. This fixes a couple places where we output the
last line of the file instead of the instruction as the disassembly.
This CL changes validate.cpp to use diag providing an explicit
instruction. This changes the result of the function end checks to not
output a disassembly anymore as printing the last line of the module
didn't seem to make sense.
* Combines OpAccessChain, OpInBoundsAccessChain, OpPtrAccessChain and
OpInBoundsPtrAccessChain
* New folding rule to fold add with 0 for integers
* Converts to a bitcast if the result type does not match the operand
type
Currently, some instructions will be missing from the list of
ordered_instructions. This will cause issues due to the debug change
which passed the last instruction into subsequent passes.
This CL moves the addition to the ordered list out of the
RegisterInstruction method into AddOrderedInstruction. This method is
called first in ProcessInstruction and the CapabilitiesPass and IdPass
are updated to take an Instruction parameter.
This CL removes the redundant operator name from the error messages in
validate_composites. The operator will be printed on the next line with
the disassembly.
This CL splits the switch in ImagePass into individual validate
functions. The error messages have been updated to drop the
suffix/prefix of the opcode name since it will be displayed in the
disassembly.
This re-implements the -Oconfig=<file> flag to use a new API that takes
a list of command-line flags representing optimization passes.
This moves the processing of flags that create new optimization passes
out of spirv-opt and into the library API. Useful for other tools that
want to incorporate a facility similar to -Oconfig.
The main changes are:
1- Add a new public function Optimizer::RegisterPassesFromFlags. This
takes a vector of strings. Each string is assumed to have the form
'--pass_name[=pass_args]'. It creates and registers into the pass
manager all the passes specified in the vector. Each pass is
validated internally. Failure to create a pass instance causes the
function to return false and a diagnostic is emitted to the
registered message consumer.
2- Re-implements -Oconfig in spirv-opt to use the new API.
Fixes #1731
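A sketch of how a tool might drive the new API (the flag spellings in the example are assumptions on my part):

```
#include <cstdint>
#include <string>
#include <vector>
#include "spirv-tools/optimizer.hpp"

// Build a pass pipeline from -Oconfig style flags. An unknown flag makes
// RegisterPassesFromFlags return false after emitting a diagnostic to the
// registered message consumer.
bool RunConfiguredPasses(const std::vector<std::string>& flags,
                         const std::vector<uint32_t>& in,
                         std::vector<uint32_t>* out) {
  spvtools::Optimizer opt(SPV_ENV_UNIVERSAL_1_3);
  if (!opt.RegisterPassesFromFlags(flags)) return false;
  return opt.Run(in.data(), in.size(), out);
}
```

For example, flags like {"--eliminate-dead-branches", "--merge-return"} would register those two passes in that order.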
* Updated folding rules related to vector shuffle to account for the
undef literal value:
* FoldVectorShuffleFeedingShuffle
* FoldVectorShuffleFeedingExtract
* FoldVectorShuffleWithConstants
* These rules would commit memory violations due to treating the undef
literal value as an accessible composite component
Fixes #1727
* If the pass finds any dead branches it can optimize, then at the end of
the pass it reorders basic blocks to ensure they satisfy block ordering
requirements
* Added some new tests
* While investigating this issue, found and fixed a non-deterministic
ordering of dominators
* Now the edges used to construct the dominator tree are sorted
according to postorder traversal indices
This CL updates the code to pull a valid instruction for the line number
when outputting a component error in OpVectorShuffle. The error line
isn't the best at this point as it points at the component, but it's
better then a -1 (turning to max<size_t>) that was being output.
The error messages has been updated to better reflect what the error is
attempting to say.
Issue 1719.
With the current implementation, the constant manager does not keep around
two constants with the same value but different types when the types
hash to the same value. So when you start looking for that constant you
will get a constant with the wrong type back.
I've made a few changes to the constant manager to fix this. First off,
I have changed the map from constant to ids to be an std::multimap.
This way a single constant can be mapped to multiple ids, each
representing a different type.
Then when asking for an id of a constant, we can search all of the ids
associated with that constant in order to find the one with the correct
type.
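A toy sketch of the lookup described above (names and types are simplified; the real manager keys on Constant objects and consults the type manager):

```
#include <cstdint>
#include <map>

// One constant value can now map to several ids, one per distinct type,
// so the lookup filters the candidates by the requested type id.
struct ConstantEntry {
  uint32_t id;
  uint32_t type_id;
};
using ConstantMap = std::multimap<int64_t, ConstantEntry>;

uint32_t FindIdOfType(const ConstantMap& constants, int64_t value,
                      uint32_t type_id) {
  auto range = constants.equal_range(value);
  for (auto it = range.first; it != range.second; ++it) {
    if (it->second.type_id == type_id) return it->second.id;
  }
  return 0;  // No constant with this value has the requested type.
}
```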
When folding an OpVectorShuffle where the first operand is defined by
an OpVectorShuffle, is unused, and is equal to the second, we end up
with an infinite loop. This is because we think we change the
instruction, but it does not actually change. So we keep trying to
fold the same instruction.
This commit fixes up that specific issue. When the operand is unused,
we replace it with Null.
When folding a vector shuffle that feeds another vector shuffle causes
the size of the first operand to change, the other indices have to be
adjusted relative to the new size.
The function class provides a {Set|Get}Parent call in order to provide
the context to the LoopDescriptor methods. This CL removes the module
from Function and provides the needed context directly to LoopDescriptor
on creation.
This CL removes the context() method from opt::Function. In the places
where the context() was used we can retrieve, or provide, the context in
another fashion.
Currently the IRContext is passed into the Pass::Process method. It is
then up to the individual pass to store the context into the context_
variable. This CL changes the Run method to store the context before
calling Process which no-longer receives the context as a parameter.
- Vulkan 1.0 uses strict layout rules
- Vulkan 1.0 with relaxed-block-layout validator option
enforces all rules except for the relaxation of vector
offset.
- Vulkan 1.1 and later always supports relaxed block layout
Add spot check tests for the relaxed-block-layout scenarios.
Fixes #1697
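A sketch of opting into the relaxation through the API (method names are my best recollection of the C++ validator-options wrapper; treat them as an assumption — on the command line this is the --relax-block-layout switch):

```
#include <cstdint>
#include <vector>
#include "spirv-tools/libspirv.hpp"

// Validate a Vulkan 1.0 module with the relaxed vector-offset rule;
// Vulkan 1.1 targets get the relaxation without the option.
bool ValidateRelaxed(const std::vector<uint32_t>& binary) {
  spvtools::ValidatorOptions options;
  options.SetRelaxBlockLayout(true);
  spvtools::SpirvTools tools(SPV_ENV_VULKAN_1_0);
  return tools.Validate(binary.data(), binary.size(), options);
}
```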
This CL moves the various validate files into the val/ directory with
the rest of the validation infrastructure. This matches how opt/ is
set up, with the passes alongside the infrastructure.
Other environments do not.
Add tests for OpenGL 4.5 and SPIR-V universal 1.0 to ensure
they still check monotonic layout.
For universal 1.0, we're assuming it otherwise follows Vulkan
rules for block layout.
Fixes #1685
For the instructions which execute after the IdPass check we can provide
the Instruction instead of the spv_parsed_instruction_t. This
Instruction class provides a bit more context (like the source line)
that is not available from spv_parsed_instruction_t.
This CL moves the validation code to the val:: namespace. This makes it
clearer which instance of the Instruction and other classes are being
referred to.
This CL moves the files in opt/ to consistently be under the opt::
namespace. This frees up the ir:: namespace so it can be used to make a
shared ir representation.
Currently the utils/ folder uses both spvutils:: and spvtools::utils.
This CL changes the namespace to consistently be spvtools::utils to match
the rest of the codebase.
Implement rules for row-major matrices
Use ArrayStride and MatrixStride to compute sizes
Propagate matrix stride and RowMajor/ColumnMajor through array members of structs.
Fixes #1637. Fixes #1668.
Fixes #1618.
Adds a check that validates acceptable exits from case constructs. Case
constructs may only exit to another case construct, the corresponding
merge, an outer loop continue or outer loop merge.
Fixes #1664: PushConstant with Block follows storage buffer rules
PushConstant variables were being checked with block rules, which are
too strict.
Fixes #1606: StorageBuffer with Block layout follows buffer rules
StorageBuffer variables were not being checked before.
Fix layout messages: say storage class and decoration
We need to provide more information about storage class and decoration.
The folding routines are currently global functions. They also rely on
data in an std::map that holds the folding rules for each opcode. This
causes that map to not have a clear owner, and therefore never gets
deleted.
There has been a request to delete this map. To implement this, we will
create an InstructionFolder class that owns the maps. The IRContext will
own the InstructionFolder instance. Then the global functions will
become public member functions of the InstructionFolder.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1659.
There are a few locations where we need to handle duplicate types. We
cannot merge them because they may be needed for reflection. When this
happens we need do some extra lookups in the type manager.
The specific fixes are:
1) When generating a constant through `GetDefiningInstruction` accept
and use an id for the desired type of the constant. This will make sure
you get the type that is needed.
2) In Private-to-local, make sure we update the def-use chains when a
new pointer type is created.
3) In the type manager, make sure that `FindPointerToType` returns a
pointer that points to the given type and not a duplicate type.
4) In scalar replacement, make sure the null constants that are created
are the correct type.
- Add asm/dis test for SPV_KHR_8bit_storage
- validator: SPV_KHR_8bit_storage capabilities enable declaration of 8bit int
TODO:
- validator: ban arithmetic on 8bit unless Int8 is enabled
Covered by https://github.com/KhronosGroup/SPIRV-Tools/issues/1595
Produce better error diagnostics in the CFG validation.
This CL fixes up several issues with the diagnostic error line output
in the CFG validation code. For the cases where we can determine a
better line it has been output. For other cases, we removed the
diagnostic line and the error line number from the results.
Fixes #1657
Revert "Don't merge types of resources"
This reverts commit f393b0e480, but leaves
the tests that were added. Added a new test. These tests are there so that,
if someone tries the same change I made, they will see the tests that
they need to handle.
Don't run remove duplicates in -O and -Os
Remove duplicates was run to help reduce compile time when looking for
types in the type manager. I've run compile-time tests on three sets
of shaders, and the compile time does not seem to change.
It should be safe to remove it.
The Compact IDs pass can corrupt the CFG, but first the CFG has to
be set up. To do this, add a test that builds the CFG, then performs the
compact IDs pass, then checks context consistency.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1648
When doing reflection users care about the names of the variable, the
name of the type, and the name of the members. Remove duplicates breaks
this because it removes the name of one of the types when merging.
To fix this we have to keep the different types around for each
resource. This commit adds code in remove duplicates to look for the
types used to describe resources, and make sure they do not get merged.
However, we still allow merging a type used in a resource with one not
used in a resource; this is done when the non-resource type comes second.
This could have a negative effect on compile time, but it was not
expected to be much.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1372.
Fixes #937
Stop std140/430 validation when runtime array is encountered.
Check for standard uniform/storage buffer layout instead of std140/430.
Added validator command line switch to skip block layout checking.
Validate structs decorated as Block/BufferBlock only when they
are used as variable with storage class of uniform or push
constant.
Expose --relax-block-layout to command line.
dneto0 modification:
- Use integer arithmetic instead of floor.
We need this to reduce the test time on Visual Studio debug builds.
AppVeyor times out at 300s for a process.
Artificially lower the local variable limit check for testing purposes.
We don't get much value from using the full 500K+ variables.
Use a limit of 5000 instead.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1632
Add SPV_ENV_WEBGPU_0 for work-in-progress WebGPU.
val: Disallow OpUndef in WebGPU env
Silence unused variable warnings when !defined(SPIRV_EFFCE)
Limit visibility of validate_instruction.cpp's symbols
Only InstructionPass needs to be visible so all other functions are put
in an anonymous namespace inside the libspirv namespace.
Also add a corresponding check for capabilities in the validator.
Update previously existing test cases where an instruction used to fail
assembling because of a version check, but now they succeed because the
instruction is also guarded by a capability. Now it should assemble.
Add tests to ensure that capabilities are checked appropriately.
The explicitly reserved instructions OpImageSparseSampleProj*
now assemble, but they fail validation.
Fixes #1624
[val] Add extra context to error messages.
This CL extends the error messages produced by the validator to output the
disassembly of the errored line.
The validation_id messages have also been updated to print the line number of
the error instead of the word number. Note, the error number is from the start
of the SPIR-V, it does not include any headers printed in the disassembled code.
Fixes #670, #1581
This CL reverts the revert of 'Disallow array-of-arrays with DescriptorSets when
validating." Other changes have been committed which should alleviate the
AppVeyor resource constraints.
This reverts commit f2c93c6e12.
This CL adds validation to disallow using an array-of-arrays when attached to a
DescriptorSet.
Fixes #1522
Validate Ids before DataRules.
The DataRule validators call FindDefs with the assumption that the
definitions being looked at can be found. This may not be true if we
have not validated identifiers first.
This CL flips the IdPass and DataRulesPass to fix this issue.
Fixes #491
* Basic blocks now have a link to the terminator
* Check all case-specific rules
* Missing check for branching into the middle of a case (#1618)
Fixes #1120
Checks that all static uses of the Input and Output variables are listed
as interfaces in each corresponding entry point declaration.
* Changed validation state to track interface lists
* updated many tests
* Modified validation state to store entry point names
* Combined with interface list and called EntryPointDescription
* Updated uses
* Changed interface validation error messages to output the entry point name
in addition to the ID
We replace the std::vector in the Operand class by a new class that does
a small size optimization. This helps improve compile time on Windows.
Tested on three sets of shaders. Trying various values for the small
vector. The optimal value for the operand class was 2. However, for
the Instruction class, using an std::vector was optimal. Size of "0"
means that an std::vector was used.
                      Instruction size
                      0     4     8
  Operand size  0     489   544   684
                1     593   487
                2     469   570
                4     473
                8     505
This is a single-threaded run of ~120 shaders. For the multithreaded run
the results were similar. The baseline time was ~62 sec. The
optimal configuration was a size of 2 for the OperandData and an
std::vector for the OperandList, with a compile time of ~38 sec. Similar
experiments were done with other sets of shaders. The compile time still
improved, but not as much.
Contributes to https://github.com/KhronosGroup/SPIRV-Tools/issues/1609.
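A greatly simplified sketch of the small-size optimization (the real class has many more operations; this toy keeps elements inline until a fixed capacity is exceeded, then spills to the heap):

```
#include <array>
#include <cstddef>
#include <memory>
#include <vector>

// Elements live in the inline array until capacity N is exceeded, after
// which they spill into a heap-allocated std::vector.
template <typename T, size_t N>
class TinySmallVector {
 public:
  void push_back(const T& value) {
    if (!heap_ && size_ < N) {
      inline_[size_++] = value;
      return;
    }
    Spill();
    heap_->push_back(value);
  }
  T& operator[](size_t i) { return heap_ ? (*heap_)[i] : inline_[i]; }
  size_t size() const { return heap_ ? heap_->size() : size_; }

 private:
  void Spill() {
    if (heap_) return;  // already spilled
    heap_ = std::make_unique<std::vector<T>>(inline_.begin(),
                                             inline_.begin() + size_);
  }
  std::array<T, N> inline_{};
  size_t size_ = 0;
  std::unique_ptr<std::vector<T>> heap_;
};
```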
Try to reduce the amount of disk space used, especially by debug builds,
which may be contributing to AppVeyor failures.
Collapses tests in categories:
- validator
- loop optimizations
- dominator analysis
- linker
Contributes to #1615
- Fix tests for basic group operations (e.g. Reduce) to allow for
new capabilities in SPIR-V 1.3 that enable them.
- Refactor operand capability check to avoid code duplication and
to put all checks that don't need table lookup before any table
lookup.
- Test round trip assembly/disassembly support for extension
SPV_NV_viewport_array2
- Test assembly and validation of decoration ViewportRelativeNV
Fixes #1596
Fixes #1281
* New structured CFG check: all of a construct's non-header blocks'
predecessors must come from within the construct
* New function to calculate blocks in a construct
* Fixed a bug in BasicBlock type bitset
Relaxing check to not consider unreachable predecessors
* Fixing broken common uniform elim test
This CL updates the validate_id code to output the name of the object along with
the id number. There were a few instances which already output the name; this
just extends that to all of them. Now the output should say 123[obj] instead of just
123.
Issue #1581
* Disallow array-of-arrays with DescriptorSets when validating.
This CL adds validation to disallow using an array-of-arrays when attached to a
DescriptorSet.
Fixes #1522
By using forward pointers, we are able to define a struct that has a
pointer to itself. This can be direct or indirect. The current
implementation of the type manager did not handle this case. There are
three changes made in this commit in order to handle this case:
1) Change the handling of OpTypeForwardPointer
The current handling of OpTypeForwardPointer is broken if there is a
reference to the pointer before the real definition. When building the
type that contains the forward-declared pointer, the type manager will
ask for the type for that id, and will get a nullptr because it does not
exist. This nullptr is not handled very well.
The change is to keep track of the incomplete types the first time
through all of the types. An incomplete type is a ForwardPointer or any
type that references an incomplete type.
Then we implement a second pass through the incomplete types that will
complete them.
2) Hashing types.
When hashing a type, we want to use all of the subtypes as part of the
hash. However, with types that reference themselves, this creates an
infinite recursion. To get around this, we keep track of which types
have been seen on the path from the root type. If we have seen the
current type already, then we can stop the recursion.
3) Comparing types.
In order to check if two types are the same, we must check that all of
their subtypes are the same as well. This also causes an infinite
recursion. The solution is to stop comparing the subtypes if we are
trying to compare two pointer types that we are already in the middle of
comparing. The idea is that if the two pointers are different, then the
in-progress compare will return false itself.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1578.
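A sketch of the cycle-aware hashing described in (2), with types reduced to a simple node structure (the real code hashes the type manager's Type objects):

```
#include <cstddef>
#include <functional>
#include <unordered_set>
#include <vector>

// A type is a kind plus subtype edges; self-references are possible
// through pointers, so the hash tracks the current root-to-node path and
// stops recursing when it revisits a node already on that path.
struct TypeNode {
  int kind;
  std::vector<const TypeNode*> subtypes;
};

size_t HashType(const TypeNode* type,
                std::unordered_set<const TypeNode*>* on_path) {
  if (!on_path->insert(type).second) return 0;  // already on this path
  size_t hash = std::hash<int>()(type->kind);
  for (const TypeNode* sub : type->subtypes) {
    hash = hash * 31 + HashType(sub, on_path);
  }
  on_path->erase(type);  // only the current path is tracked
  return hash;
}
```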
We add a new rule to the folding rules to fold an FMix feeding an
extract when the alpha value for the element being extracted is either
0 or 1. In those cases, we can simply extract from one of the operands
of the FMix.
With that change the simplification pass completely subsumes the
insert-extract elimination pass. So we remove the insert-extract
elimination passes and replace them with calls to the simplification
pass.
In a follow up PR, we should delete the insert-extract elimination pass.
Contributes to https://github.com/KhronosGroup/SPIRV-Tools/issues/1570.
According to the SPIR-V Spec, section 2.4 Logical Layout of a Module there
should be a single required OpMemoryModel instruction provided. This CL adds
validation that OpMemoryModel is provided to the SPIR-V validator.
Fixes #1207
Removes the limit on scalar replacement for the legalization passes.
This is done by adding an option to the pass (and a command line option)
to set the limit on the maximum size of the composite that scalar
replacement is willing to divide.
Fixes #1494.
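A sketch of what this looks like when building a pipeline by hand (the pass-creation signature is my reading of the public header, and passing 0 for "no limit" follows the semantics described elsewhere in this log):

```
#include "spirv-tools/optimizer.hpp"

// Register scalar replacement with no size limit (0), as the
// legalization passes now do; a non-zero value caps the size of the
// composites the pass is willing to break up.
void RegisterUnlimitedScalarReplacement(spvtools::Optimizer* opt) {
  opt->RegisterPass(spvtools::CreateScalarReplacementPass(0));
}
```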
ADCE does not treat OpCopyMemory as an instruction that references
memory. Because of that, stores are removed that should not be.
This change teaches ADCE that OpCopyMemory and OpCopyMemorySized both
load from and store to memory. This will keep other stores live when
needed, and will allow ADCE to remove OpCopyMemory instructions as
well.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1556.
SPV_EXT_shader_viewport_index_layer enables using ViewportIndex
and Layer in vertex and tessellation shaders.
Also, as per the Vulkan spec:
> The ViewportIndex decoration must be used only within vertex,
> tessellation evaluation, geometry, and fragment shaders.
> In a vertex, tessellation evaluation, or geometry shader, any
> variable decorated with ViewportIndex must be declared using
> the Output storage class.
> In a fragment shader, any variable decorated with ViewportIndex
> must be declared using the Input storage class.
Similarly for Layer.
Currently in scalar replacement, we create a new variable for every
member of the composite being divided. This is often overkill, because
not all of those members will be used. This change will check which
elements are used and only create variables for the members that are
used.
This reduces the compile time for one set of shaders from 248s to 165s.
Part of https://github.com/KhronosGroup/SPIRV-Tools/issues/1494.
The code patterns generated by DXC around function calls can cause many
stores that store the same value that was just loaded from the same
location:
```
%10 = OpLoad %type %var
OpStore %var %10
```
We want to clean these up very early on because they can cause other
transformations to do a lot of work. For the cases I see, they can be
removed during local-single-block-elim.
For one set of shaders the compile time goes from 248s to 182s. A 26%
improvement.
Part of https://github.com/KhronosGroup/SPIRV-Tools/issues/1494.
We have already disabled common uniform elimination because it created
sequences that load an entire uniform object and then extract just a
single element. This caused problems in some drivers, and is just
generally slow because it loads more memory than needed.
However, there are other ways to get into this situation, so I've added
a pass that looks specifically for this pattern and removes it when only
a portion of the load is used.
Fixes #1547.
An FClamp instruction forces a value to be within a certain interval.
When the upper or lower bound of the FClamp is a constant and the value
being compared with is a constant, then in some cases we can fold the
comparison because the entire clamped range is, say, less than the value.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1549.
If there is a shader with a variable in the workgroup storage class that
is stored to, but never loaded, then we know nothing will read those
stores. It should be safe to remove them.
This is implemented in ADCE by treating workgroup variables the same
way that private variables are treated.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1550.
When doing if-conversion, we do not currently move code out of the side
nodes. The reason for this is that it can increase the number of
instructions that get executed because both side nodes will have to be
executed now.
In this commit, we add code to move an instruction, and all of the
instructions it depends on, out of a side node and into the header of
the selection construct. However to keep the cost down, we only do it
when the two values in the OpPhi node compute the same value. This way
we have to move only one of the instructions and the other becomes
unused most of the time. So no real extra cost.
Makes the value number table an analysis in the IR context.
Added more opcodes to the list of code-motion-safe opcodes.
Fixes #1526.
Previously, the loop class used the terms latch and continue block
interchangeably. This patch splits the two and corrects and tests some
uses of the old uses of GetLatchBlock.
This pass will look for adjacent loops that are compatible and legal to
be fused.
Loops are compatible if:
- they both have one induction variable
- they have the same upper and lower bounds
- same initial value
- same condition
- they have the same update step
- they are adjacent
- there are no break/continue in either of them
Fusion is legal if:
- fused loops do not have any dependencies with dependence distance
greater than 0 that did not exist in the original loops.
- there are no function calls in the loops (could have side-effects)
- there are no barriers in the loops
It will fuse all such loops as long as the number of registers used for
the fused loop stays under the threshold defined by
max_registers_per_loop.
Adds support for splitting loops whose register pressure exceeds a
user-provided level. This pass will split a loop into two or more loops given
that the loop is a top-level loop and that splitting the loop is legal.
Control flow is left intact for dead code elimination to remove.
This pass is enabled with the --loop-fission flag to spirv-opt.
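A sketch of scheduling the two passes together (creation-function names and the threshold values here are my assumptions):

```
#include "spirv-tools/optimizer.hpp"

// Fuse adjacent compatible loops while the fused loop stays under a
// register threshold, then split any top-level loop whose register
// pressure exceeds a threshold.
void RegisterLoopRestructuring(spvtools::Optimizer* opt) {
  opt->RegisterPass(spvtools::CreateLoopFusionPass(32))
      .RegisterPass(spvtools::CreateLoopFissionPass(32));
}
```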
If one of the operands to an OpVectorTimesScalar instruction is zero,
then the result will be the 0 vector. Currently we do not fold the
instruction unless both operands are constants. This change fixes that.
We also allow folding of OpPhi instructions where the incoming values
are either an OpUndef or the OpPhi instruction itself. As with other
cases, this can be simplified to the OpUndef.
Track live scalars in VDCE as if they were single element vectors.
Handle the extended instructions for GLSL in VDCE.
Handle composite construct instructions in VDCE.
Fixes #1511.
Eliminate unused store to variable if followed by store to same
variable in same block.
Most significantly, this cleans up stores made unused by this pass.
These useless stores can inhibit subsequent optimizations, specifically
LocalSingleStoreElim. Eliminating them makes subsequent optimization more
effective.
The main effect of this pass is to simplify the work done by the SSA
rewriter. It catches many local loads/stores, which helps speed up the
work done by the main rewriter.
Introduce a pass that does a DCE type analysis for vector elements
instead of the whole vector as a single element.
It will then rewrite instructions that are not used with something else.
For example, an instruction whose value is not used, even though it is
referenced, is replaced with an OpUndef.
For each function, the analysis determines which SSA registers are live
at the beginning of each basic block and which ones are killed at
the end of the basic block.
It also includes utilities to simulate the register pressure for loop
fusion and fission.
The implementation is based on the paper "A non-iterative data-flow
algorithm for computing liveness sets in strict ssa programs" from
Boissinot et al.
* Adds new pass for validating non-uniform group instructions
* Currently only checks execution scope for Vulkan 1.1 and SPIR-V 1.3
* Added test framework
The local-single-store-elim algorithm is not fundamentally bad.
However, when there are a large number of variables, some of the
maps that are used can become very large. These large data structures
then take a very long time to be destroyed. I've seen cases around 40%
of the time.
I've rewritten that algorithm to not use as much memory. This gives a
significant improvement when running a large number of shader through
DXC.
I've also made a small change to local-single-block-elim to delete the
loads that it has replaced. That way local-single-store-elim will not
have to look at those. local-single-store-elim now does the same thing.
The time for one set goes from 309s down to 126s. For another set, the
time goes from 102s down to 88s.
GCD MIV test as described in Chapter 3 of "Optimizing Compilers for
Modern Architectures: A Dependence-Based Approach" by Randy Allen, and
Ken Kennedy.
Delta test as described in Figure 3 of "Practical Dependence Testing" by
Gina Goff, Ken Kennedy, and Chau-Wen Tseng from PLDI '91.
* Reworked how execution model limitations are checked
* Now OpFunction checks which entry points call it and checks its
registered limitations instead of building a call stack in the entry
point
* New tests
* Moving function to entry point mapping into VState
Relaxes checks for per-vertex builtin variables. If the builtin
decoration is applied to a variable, then those checks now allow a level
of arraying on the variable before checking the type consistency.
* Allows arrays of variables to be present for the per-vertex variables:
* Position
* PointSize
* ClipDistance
* CullDistance
* Updated tests
Add test for case where OpBranch branches to a value (a function value).
Previous tests only checked a label value (name of a block).
Update validate_id.cpp to remove the TODO for OpBranch and say that it
is already checked in validate_cfg.cpp
The unordered_set in ADCE that holds all of the live instructions takes
a very long time to be destroyed. In some shaders, it takes over 40% of
the time.
If we look at the unique ids of the live instructions, I believe they
are dense enough to make a simple bit vector a good choice to hold that
data. When I check the density of the bit vector for larger shaders, we
are usually using less than 4 bytes per element in the vector, and
almost always less than 16.
So, in this commit, I introduce a simple bit vector class, and
use it in ADCE.
This helps improve the compile time for some shaders on Windows by the
40% mentioned above.
Contributes to https://github.com/KhronosGroup/SPIRV-Tools/issues/1328.
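A minimal sketch of the bit-vector idea (this toy only supports set and test):

```
#include <cstddef>
#include <cstdint>
#include <vector>

// One bit per unique id: membership tests and inserts touch a single
// word, and tearing the whole set down is one vector deallocation.
class TinyBitVector {
 public:
  void Set(uint32_t id) {
    const size_t word = id / 64;
    if (word >= words_.size()) words_.resize(word + 1, 0);
    words_[word] |= uint64_t(1) << (id % 64);
  }
  bool Get(uint32_t id) const {
    const size_t word = id / 64;
    return word < words_.size() && ((words_[word] >> (id % 64)) & 1) != 0;
  }

 private:
  std::vector<uint64_t> words_;
};
```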
For each loop in a function, the pass walks the loops from innermost to outermost
and tries to peel loops for which a certain number of iterations can be done before or after the loop.
To limit code growth, peeling will not happen if the growth in code size goes above a configurable threshold.
Provides functionality to perform ZIV and SIV dependency analysis tests
between a load and store within the same loop.
Dependency tests rely on scalar analysis to prove and disprove dependencies
with regard to the loop being analysed.
Based on the 1990 paper Practical Dependence Testing by Goff, Kennedy, Tseng
Adds support for marking loops in the loop nest as IRRELEVANT.
Loops are marked IRRELEVANT if the analysed instructions contain
no induction variables for the loops, i.e. the loop's induction
variable is not relevant to the dependence of the store and load.
Adding three rules to fold OpDot (implemented as two).
- When an OpDot has two constants, then fold to the resulting const.
- When one of the inputs is the 0 vector, then fold to zero.
- When one of the inputs is a single 1 with 0s, then rewrite to an
OpCompositeExtract of the appropriate element. This will help find
even more folding opportunities.
Contributes to #709.
According to Vulkan spec 1.1.72:
> The PrimitiveId decoration must be used only within fragment,
> tessellation control, tessellation evaluation, and geometry shaders.
> In a tessellation control or tessellation evaluation shader, any
> variable decorated with PrimitiveId must be declared using the Input
> storage class.
We were enforcing that PrimitiveId can only be used with Output
storage class for TCS and TES before.
From the test case, the slice of the CFG that is interesting for the bug
is
25
|
v
30
|
v
31<-+
| |
v |
34--+
1. In block 25, we have a Phi candidate for %f with arguments
%47 = Phi[%float_0, %0]. This merges %float_0 and a yet unknown
argument from the external loop backedge.
2. We are now processing block 34:
i. The load %35 = OpLoad %f triggers a Phi candidate to be placed in
block 31.
ii. The Phi candidate %50 = Phi needs two arguments. The one coming
from block 30 is %47. But the one coming from block 34 (which we
are now processing and have marked sealed), finds %50 itself as
the reaching def for %f.
3. This wrongfully marks %50 as a copy-of Phi, which ultimately makes
both %47 and %50 copy-of Phis that get eliminated.
Update grammar table generation:
- Get extensions from instructions, not just operand-kinds
- Don't explicitly list extensions that come from the SPIR-V core
grammar or from a KHR extended instruction set grammar.
This makes it easier to support new extensions since the recommended
extension strategy is to add instructions to the core grammar file.
Also, test the validator has trivial support for passing through
the extensions SPV_NV_shader_subgroup_partitioned and
SPV_EXT_descriptor_indexing.
Migrating to unified grammar means we sometimes have two fields
for a certain feature: version and extensions. It means the feature
in question can be used either in SPIR-V of advanced-enough
versions or in any SPIR-V with the specified extensions.
Validator now respects the above rules.
At every definition of a builtin id, run at-reference-check rules on the
defining instruction as well.
Previously the validation was missing the case when an invalid storage
class was defined in the instruction which defines the built-in, and not
in the instruction which references the built-in.
Refactored validate built-ins to make
GetExecutionModels(entry_point)
and
GetExecutionModes(entry_point)
available in validation state.
Entry points are allowed to have multiple execution modes and execution
models.
Finished the last missing feature in Vulkan built-ins validation:
FragDepth requires DepthReplacing.
Currently OpImageTexelPointer operations are treated like a use of the
pointer, but it does not look at the memory being referenced to make
sure stores are not removed.
This change teaches it to identify the memory being accessed, and
treats it as if that memory is loaded.
Fixes to #1445.
OpImageTexelPointer acts like a special kind of load. It is not an
array load, but it also cannot be removed the same way a regular
load can. The type of propagation that needs to be done is similar
to what we do for arrays, so I want to merge that code into that
optimization.
Contributes to #1445.
OpImageTexelPointer acts like a special kind of load. It is still
safe to change the storage class of a variable used in an
OpImageTexelPointer instruction.
Contributes to #1445.
CPPreference.com has this description of digits10:
“The value of std::numeric_limits<T>::digits10 is the number of
base-10 digits that can be represented by the type T without change,
that is, any number with this many significant decimal digits can be
converted to a value of type T and back to decimal form, without
change due to rounding or overflow.”
This means that any number with this many digits can be represented
accurately in the corresponding type. A change in any digit in a
number after that may or may not result in a different bitwise
representation. Therefore this isn’t necessarily enough precision to
accurately represent the value in text. Instead we need max_digits10
which has the following description:
“The value of std::numeric_limits<T>::max_digits10 is the number of
base-10 digits that are necessary to uniquely represent all distinct
values of the type T, such as necessary for
serialization/deserialization to text.”
The patch includes a test case in hex_float_test which tries to do a
round-trip conversion of a number that requires more than 6 decimal
places to be accurately represented. This would fail without the
patch.
Sadly this also breaks a bunch of other tests. Some of the tests in
hex_float_test use ldexp and then compare it with a value which is not
the same as the one returned by ldexp but instead is the value rounded
to 6 decimals. Others use values that are not evenly representable as
a binary floating fraction but then happened to generate the same
value when rounded to 6 decimals. Where the actual value didn't seem
to matter, these have been changed to use different values that can be
represented as a binary fraction.
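A small standalone illustration of the distinction (not from the test suite): printing 1.0f/3.0f with digits10 (6) significant digits does not round-trip to the same float, while max_digits10 (9) does.

```
#include <iostream>
#include <limits>
#include <sstream>

// Compare decimal round-trips at digits10 and max_digits10 precision.
int main() {
  const float value = 1.0f / 3.0f;
  for (int digits : {std::numeric_limits<float>::digits10,
                     std::numeric_limits<float>::max_digits10}) {
    std::ostringstream out;
    out.precision(digits);
    out << value;
    float back = 0.0f;
    std::istringstream(out.str()) >> back;
    std::cout << digits << " digits: " << out.str()
              << (back == value ? " round-trips" : " does not round-trip")
              << "\n";
  }
  return 0;
}
```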