For each function, the pass walks the loops from innermost to outermost
and tries to peel those loops for which a certain number of iterations can be executed before or after the loop.
To limit code growth, peeling will not happen if the growth in code size goes above a configurable threshold.
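A minimal sketch of that decision, with all names and the size estimate invented for illustration:
```
#include <cstddef>
#include <vector>

// Minimal stand-ins for illustration only; the real pass works on the
// SPIRV-Tools loop descriptors.
struct Loop {
  std::size_t body_size = 0;  // rough instruction count of the loop body
  bool peelable = false;      // e.g. the trip count is understood
};

// Walk the loops innermost-first and peel only when the estimated code
// growth stays under the configurable threshold.
void PeelLoops(std::vector<Loop>& loops_inner_to_outer,
               std::size_t code_growth_threshold) {
  for (Loop& loop : loops_inner_to_outer) {
    if (!loop.peelable) continue;
    // Peeling duplicates the body once, so growth is roughly body_size.
    if (loop.body_size > code_growth_threshold) continue;
    // ... perform the peeling before or after the loop (omitted) ...
  }
}
```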
Provides functionality to perform ZIV and SIV dependency analysis tests
between a load and store within the same loop.
Dependency tests rely on scalar analysis to prove and disprove dependencies
with regard to the loop being analysed.
Based on the 1990 paper Practical Dependence Testing by Goff, Kennedy, Tseng
Adds support for marking loops in the loop nest as IRRELEVANT.
Loops are marked IRRELEVANT if the analysed instructions contain
no induction variables for those loops, i.e. the loop's induction
variable is not relevant to the dependence between the store and the load.
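For intuition, the ZIV and strong-SIV tests from that paper boil down to checks like the following sketch (types and names are illustrative, not the pass's API):
```
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <optional>

// A subscript of the form coefficient * i + constant, where i is the
// induction variable of the loop being analysed. Illustrative only.
struct LinearSubscript {
  int64_t coefficient;  // 0 means the subscript ignores the loop's IV (ZIV)
  int64_t constant;
};

// ZIV test: neither access uses the induction variable, so the two accesses
// either always or never touch the same element.
bool ZivIndependent(const LinearSubscript& a, const LinearSubscript& b) {
  return a.coefficient == 0 && b.coefficient == 0 && a.constant != b.constant;
}

// Strong SIV test: returns the dependence distance when the accesses can
// conflict, std::nullopt when independence is proven.
std::optional<int64_t> StrongSivDistance(const LinearSubscript& a,
                                         const LinearSubscript& b,
                                         int64_t trip_count) {
  // The strong SIV test only applies when both subscripts use the IV with
  // the same non-zero coefficient; callers check that first.
  assert(a.coefficient != 0 && a.coefficient == b.coefficient);
  int64_t delta = b.constant - a.constant;
  if (delta % a.coefficient != 0) return std::nullopt;  // distance not integral
  int64_t distance = std::abs(delta / a.coefficient);
  if (distance >= trip_count) return std::nullopt;      // outside the iteration space
  return distance;  // e.g. a[i] and a[i + distance] touch the same element
}
```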
Adds three rules to fold OpDot (implemented as two):
- When an OpDot has two constant inputs, fold it to the resulting constant.
- When one of the inputs is the zero vector, fold to zero.
- When one of the inputs is a vector with a single 1 and 0s elsewhere,
  rewrite to an OpCompositeExtract of the corresponding element of the
  other input. This will help find even more folding opportunities.
Contributes to #709.
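A rough sketch of the first two rules over plain float vectors, purely for illustration (this is not the actual folding-rule interface):
```
#include <cassert>
#include <cstddef>
#include <optional>
#include <vector>

// Fold a dot product when enough is known about the operands.  A populated
// optional stands in for a constant SPIR-V vector operand; illustrative only.
std::optional<float> TryFoldDot(const std::optional<std::vector<float>>& a,
                                const std::optional<std::vector<float>>& b) {
  auto is_zero_vector = [](const std::optional<std::vector<float>>& v) {
    if (!v) return false;
    for (float e : *v) {
      if (e != 0.0f) return false;
    }
    return true;
  };
  // Rule 2: either input is the zero vector -> the dot product is zero.
  if (is_zero_vector(a) || is_zero_vector(b)) return 0.0f;
  // Rule 1: both inputs are constants -> compute the result directly.
  if (a && b) {
    assert(a->size() == b->size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < a->size(); ++i) sum += (*a)[i] * (*b)[i];
    return sum;
  }
  // Rule 3 (a unit basis-vector input) rewrites the OpDot into an
  // OpCompositeExtract instead of producing a constant, so it is not
  // modelled here.
  return std::nullopt;
}
```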
According to Vulkan spec 1.1.72:
> The PrimitiveId decoration must be used only within fragment,
> tessellation control, tessellation evaluation, and geometry shaders.
> In a tessellation control or tessellation evaluation shader, any
> variable decorated with PrimitiveId must be declared using the Input
> storage class.
Previously we were enforcing that PrimitiveId could only be used with the
Output storage class in tessellation control and tessellation evaluation shaders.
From the test case, the slice of the CFG that is interesting for the bug
is
25
|
v
30
|
v
31<-+
| |
v |
34--+
1. In block 25, we have a Phi candidate for %f with arguments
%47 = Phi[%float_0, %0]. This merges %float_0 and a yet unknown
argument from the external loop backedge.
2. We are now processing block 34:
i. The load %35 = OpLoad %f triggers a Phi candidate to be placed in
block 31.
ii. The Phi candidate %50 = Phi needs two arguments. The one coming
from block 30 is %47, but the one coming from block 34 (which we
are now processing and have marked sealed) finds %50 itself as
the reaching def for %f.
3. This wrongfully marks %50 as a copy-of Phi, which ultimately makes
both %47 and %50 copy-of Phis that get eliminated.
Update grammar table generation:
- Get extensions from instructions, not just operand-kinds
- Don't explicitly list extensions that come from the SPIR-V core
grammar or from a KHR extended instruction set grammar.
This makes it easier to support new extensions since the recommended
extension strategy is to add instructions to the core grammar file.
Also, test that the validator has trivial support for passing through
the extensions SPV_NV_shader_subgroup_partitioned and
SPV_EXT_descriptor_indexing.
Migrating to the unified grammar means we sometimes have two fields
for a certain feature: version and extensions. It means the feature
in question can be used either in SPIR-V of an advanced-enough
version or in any SPIR-V module that declares one of the specified extensions.
The validator now respects these rules.
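In other words, a check along these lines (a sketch with assumed field names, not the validator's real data structures):
```
#include <cstdint>
#include <string>
#include <unordered_set>
#include <vector>

// Illustrative only: a feature is usable either because the module's SPIR-V
// version is new enough or because one of its enabling extensions was
// declared with OpExtension.
struct FeatureRequirement {
  uint32_t min_version;                 // e.g. 0x00010300 for SPIR-V 1.3
  std::vector<std::string> extensions;  // extensions that also enable it
};

bool FeatureAllowed(const FeatureRequirement& req, uint32_t module_version,
                    const std::unordered_set<std::string>& declared_extensions) {
  if (module_version >= req.min_version) return true;
  for (const std::string& ext : req.extensions) {
    if (declared_extensions.count(ext)) return true;
  }
  return false;
}
```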
At every definition of a builtin id, run at-reference-check rules on the
defining instruction as well.
Previously the validation was missing the case when an invalid storage
class appeared on the instruction which defines the built-in, and not on
the instruction which references the built-in.
Refactored validate built-ins to make
GetExecutionModels(entry_point)
and
GetExecutionModes(entry_point)
available in validation state.
Entry points are allowed to have multiple execution modes and execution
models.
Finished the last missing feature in Vulkan built-ins validation:
FragDepth requires DepthReplacing.
Currently OpImageTexelPointer operations are treated like a use of the
pointer, but the pass does not look at the memory being referenced to
make sure stores to that memory are not removed.
This change teaches it to identify the memory being accessed and treat
it as if that memory were loaded.
Fixes #1445.
OpImageTexelPointer acts like a special kind of load. It is not an
array load, but it also cannot be removed the same way a regular
load can. The type of propagation that needs to be done is similar
to what we do for arrays, so I want to merge that code into that
optimization.
Contributes to #1445.
OpImageTexelPointer acts like a special kind of load. It is still
safe to change the storage class of a variable used in an
OpImageTexelPointer instruction.
Contributes to #1445.
cppreference.com has this description of digits10:
“The value of std::numeric_limits<T>::digits10 is the number of
base-10 digits that can be represented by the type T without change,
that is, any number with this many significant decimal digits can be
converted to a value of type T and back to decimal form, without
change due to rounding or overflow.”
This means that any number with this many digits can be represented
accurately in the corresponding type. Changing any digit beyond that
point may or may not result in a different bitwise representation.
Therefore this isn't necessarily enough precision to accurately
represent the value in text. Instead we need max_digits10, which has
the following description:
“The value of std::numeric_limits<T>::max_digits10 is the number of
base-10 digits that are necessary to uniquely represent all distinct
values of the type T, such as necessary for
serialization/deserialization to text.”
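A small stand-alone demonstration of the difference (not part of the patch):
```
#include <iomanip>
#include <iostream>
#include <limits>
#include <sstream>

// Round-trip a float through text with a given number of significant digits.
float RoundTrip(float value, int digits) {
  std::stringstream ss;
  ss << std::setprecision(digits) << value;
  float result = 0.0f;
  ss >> result;
  return result;
}

int main() {
  // The nearest float to pi needs more than digits10 (6) significant decimal
  // digits to be written out and read back without changing its bits.
  const float value = 3.14159274f;
  const int d10 = std::numeric_limits<float>::digits10;       // 6
  const int md10 = std::numeric_limits<float>::max_digits10;  // 9
  std::cout << (RoundTrip(value, d10) == value) << "\n";   // prints 0
  std::cout << (RoundTrip(value, md10) == value) << "\n";  // prints 1
}
```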
The patch includes a test case in hex_float_test which does a
round-trip conversion of a number that requires more than 6 significant
decimal digits to be accurately represented. This would fail without
the patch.
Sadly this also breaks a bunch of other tests. Some of the tests in
hex_float_test use ldexp and then compare the result with a value which
is not the one returned by ldexp but instead that value rounded to 6
decimals. Others use values that are not exactly representable as a
binary fraction but happened to produce the same value when rounded to
6 decimals. Where the actual value didn't seem to matter, these have
been changed to different values that can be represented exactly as a
binary fraction.
When the original code copies an entire array or struct one element at a
time, this turns into a series of OpCompositeInsert instructions followed
by a store of the whole array. We currently miss opportunities in copy
propagate arrays because we do not recognize this as a copy.
This commit adds code to copy propagate arrays to identify this code
pattern.
Also updates the performance passes to run array copy propagation.
The first implementation of MemoryObject, which is used in copy
propagate arrays, forced the access chain to be like the access chains
in OpCompositeExtract. This prevented the memory object from
representing an array element that was extracted with a variable
index. Looking at the code, that restriction is not necessary. I also
see some opportunities for doing this in some real shaders.
Contributes to #1430.
This patch adds support for the analysis of scalars in loops. It works
by traversing the def-use chain to build a DAG of scalar operations and
then simplifies the DAG by folding constants and grouping like terms.
It represents induction variables as recurrent expressions with respect
to a given loop and can simplify DAGs containing recurrent expressions by
rewriting the entire DAG to be a recurrent expression with respect to
the same loop.
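For intuition, an induction variable can be modelled as offset + coefficient * iteration with respect to a loop; the sketch below uses invented names and integer-only folding:
```
#include <cstdint>

// A recurrent expression X(i) = offset + coefficient * i with respect to a
// single loop, where i is the loop's iteration number.  Names are invented
// for illustration, not taken from the analysis.
struct Recurrent {
  int64_t offset;
  int64_t coefficient;
};

// Combining two recurrences over the same loop, or a recurrence and a
// loop-invariant constant, yields another recurrence, so a whole DAG of such
// nodes can be rewritten into a single recurrent expression for that loop.
Recurrent Add(const Recurrent& a, const Recurrent& b) {
  return {a.offset + b.offset, a.coefficient + b.coefficient};
}

Recurrent AddConstant(const Recurrent& a, int64_t c) {
  return {a.offset + c, a.coefficient};
}

Recurrent MulConstant(const Recurrent& a, int64_t c) {
  return {a.offset * c, a.coefficient * c};
}

// Example: an induction variable starting at 0 with step 1 is {0, 1};
// the expression 4 * i + 8 folds to the single recurrence {8, 4}.
```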
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1427
Adjusting validation to the new rule:
"Before version 1.3, it is only valid to use this instruction with
TessellationControl, GLCompute, or Kernel execution models.
There is no such restriction starting with version 1.3."
Also fixed wrong version numbers in source/spirv_target_env.cpp.
When we change the type of an object that gets stored, we do not want to
change the type of the memory location being stored to. In order to
still be able to do the rewrite, we will decompose and rebuild the
object so it is the type that can be stored.
Fixes #1416.
The SPIR-V generated from HLSL code contains many copies of very large
arrays. Not only are these time consuming, but they also cause
problems for drivers because they require too much space.
To work around this, we will implement an array copy propagation. Note
that we will not implement a complete array data flow analysis in order
to implement this. We will be looking for very simple cases:
1) The source must never be stored to.
2) The target must be stored to exactly once.
3) The store to the target must be a store to the entire array, and be a
copy of the entire source.
4) All loads of the target must be dominated by the store.
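A minimal sketch of those four checks, with hypothetical summary structures standing in for the real def-use walks:
```
#include <cstdint>

// Illustrative summaries of how a variable is used; the real pass derives
// this by walking the def-use chains of the module.
struct VarUses {
  uint32_t store_count = 0;          // number of OpStore instructions writing it
  bool stored_whole_object = false;  // the single store writes the entire array
  bool all_loads_dominated = false;  // every load is dominated by that store
};

// Conditions 1-4 above, for a candidate copy `target = source`.
bool CanPropagateArrayCopy(const VarUses& source, const VarUses& target) {
  if (source.store_count != 0) return false;      // 1) source is never stored to
  if (target.store_count != 1) return false;      // 2) target is stored exactly once
  if (!target.stored_whole_object) return false;  // 3) the store covers the whole array
  if (!target.all_loads_dominated) return false;  // 4) all loads dominated by the store
  return true;
}
```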
The hard part is keeping all of the types correct. We do not want to
have to do too large a search to update everything, which may not be
possible, so we give up if we see any instruction that might be hard to
update.
Also, in types.h, the element decorations are now stored in an std::map.
This change was done so the hashing algorithm for a Struct is
consistent. With the std::unordered_map, the traversal order was
non-deterministic leading to the same type getting hashed to different
values. See |Struct::GetExtraHashWords|.
Contributes to #1416.
Added a framework for validation of BuiltIn variables. The framework
allows implementation of flexible abstract rules which are required for
built-ins as the information (decoration, definition, reference) is not
in one place, but is scattered all over the module.
Validation rules are implemented as a map
id -> list<functor(instruction)>
Ids which are dependent on built-in types or objects receive a task
list, such as "this id cannot be referenced from a function which is
called from an entry point with execution model X; propagate this rule
to your descendants in the global scope".
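Roughly, the map has this shape (types simplified for illustration; the real code uses the validator's own instruction and diagnostic types):
```
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Simplified stand-in for an instruction that references a built-in
// dependent id.
struct Instruction {
  uint32_t opcode = 0;
};

// Each id that depends on a built-in carries a list of checks to run on any
// instruction that references it.  A check returns an error message, or an
// empty string on success.  Purely an illustration of the shape of the map.
using BuiltInRule = std::function<std::string(const Instruction&)>;
using BuiltInRuleMap = std::unordered_map<uint32_t, std::vector<BuiltInRule>>;

// A rule attached to an id is also propagated to ids defined in terms of it
// in the global scope.
void AddRule(BuiltInRuleMap& rules, uint32_t id, BuiltInRule rule) {
  rules[id].push_back(std::move(rule));
}
```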
Also refactored test/val/val_fixtures.
All built-ins are covered by tests.
This patch adds a new option --time-report to spirv-opt. For each pass
executed by spirv-opt, the flag prints resource utilization for the pass
(CPU time, wall time, RSS, and page faults).
This fixes issue #1378.
This pass replaces the load/store elimination passes. It implements the
SSA re-writing algorithm proposed in
Simple and Efficient Construction of Static Single Assignment Form.
Braun M., Buchwald S., Hack S., Leißa R., Mallon C., Zwinkau A. (2013)
In: Jhala R., De Bosschere K. (eds)
Compiler Construction. CC 2013.
Lecture Notes in Computer Science, vol 7791.
Springer, Berlin, Heidelberg
https://link.springer.com/chapter/10.1007/978-3-642-37051-9_6
In contrast to common eager algorithms based on dominance and dominance
frontier information, this algorithm works backwards from load operations.
When a target variable is loaded, it queries the variable's reaching
definition. If the reaching definition is unknown at the current location,
it searches backwards in the CFG, inserting Phi instructions at join points
in the CFG along the way until it finds the desired store instruction.
The algorithm avoids repeated lookups using memoization.
For reducible CFGs, which are a superset of the structured CFGs in SPIR-V,
this algorithm is proven to produce minimal SSA. That is, it inserts the
minimal number of Phi instructions required to ensure the SSA property, but
some Phi instructions may be dead
(https://en.wikipedia.org/wiki/Static_single_assignment_form).
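A condensed sketch of the lazy lookup at the heart of the algorithm, following the paper's readVariable/writeVariable but with simplified types and without the sealing machinery:
```
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Simplified stand-ins: blocks know their predecessors and definitions are
// plain integer value ids.  This mirrors the paper's pseudocode, not the
// pass's actual data structures, and omits handling of not-yet-sealed blocks.
struct Block {
  std::vector<Block*> predecessors;
};

struct SsaBuilder {
  // Memoized current definition of each variable in each block.
  std::map<std::pair<uint32_t, Block*>, uint32_t> current_def;
  uint32_t next_id = 1;

  void WriteVariable(uint32_t var, Block* block, uint32_t value) {
    current_def[{var, block}] = value;
  }

  // Walk backwards from a load until a definition is found, inserting Phi
  // ids at join points along the way.
  uint32_t ReadVariable(uint32_t var, Block* block) {
    auto it = current_def.find({var, block});
    if (it != current_def.end()) return it->second;  // already known here
    uint32_t value;
    if (block->predecessors.size() == 1) {
      value = ReadVariable(var, block->predecessors[0]);
    } else {
      // Join point (or entry): create a Phi id first so that back edges see
      // it and the recursion terminates, then query every predecessor to
      // gather the Phi's operands (operand recording omitted).
      value = next_id++;
      WriteVariable(var, block, value);
      for (Block* pred : block->predecessors) ReadVariable(var, pred);
    }
    WriteVariable(var, block, value);
    return value;
  }
};
```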
The loop peeler util takes a loop as input and creates a new one before it.
The iterator of the duplicated loop is then set to accommodate the number
of iterations required for the peeling.
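At the source level, peeling the first N iterations looks roughly like this (illustrative C++ only; the util itself transforms SPIR-V):
```
#include <cstddef>

// Before: a single loop running `count` iterations.
void Original(float* data, std::size_t count) {
  for (std::size_t i = 0; i < count; ++i) data[i] *= 2.0f;
}

// After peeling the first `peel` iterations: the duplicated loop runs before
// the original one, its iterator bounded by the peel count, and the original
// loop resumes where the peeled copy stopped.
void Peeled(float* data, std::size_t count, std::size_t peel) {
  std::size_t i = 0;
  for (; i < peel && i < count; ++i) data[i] *= 2.0f;  // peeled copy
  for (; i < count; ++i) data[i] *= 2.0f;              // original loop
}
```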
The loop peeling pass that decides whether to peel and does the
profitability analysis is left for a follow-up PR.
We are seeing shaders that have multiple returns in a function. These
functions must get inlined for legalization purposes; however, the
inliner does not know how to inline functions that have multiple
returns.
The solution we will go with is to improve the merge return pass to
handle structured control flow.
Note that the merge return pass will assume the CFG has been cleaned up
by dead branch elimination.
Fixes #857.
Previously we kept a separate static grammar table for opcodes/
operands per SPIR-V version. This commit changes that to use a
single unified static grammar table for opcodes/operands.
This essentially changes how grammar facts are queried against
a certain target environment. There is only limited filtering
according to the desired target environment; a symbol is
considered available as long as:
1. The target environment satisfies the minimal requirement of
the symbol; or
2. There is at least one extension enabling this symbol.
Note that the second rule assumes the extension enabling the
symbol is indeed requested in the SPIR-V code; checking that
should be the validator's work.
Also fixed a few grammar related issues:
* Rounding mode capability requirements are moved to client APIs.
* Reserved symbols not available in any extension are no longer
  recognized by the assembler.
Strips reflection info. This is limited to decorations and
decoration instructions related to the SPV_GOOGLE_hlsl_functionality1
extension.
It will remove the OpExtension for SPV_GOOGLE_hlsl_functionality1.
It will also remove the OpExtension for SPV_GOOGLE_decorate_string
if there are no further remaining uses of OpDecorateStringGOOGLE.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1398
This reimplementation fixes several issues when removing decorations associated
with an ID (partially addresses #1174 and gives tools for fixing #898), as well
as making it easier to remove groups; a few additional tests have been added.
DecorationManager::RemoveDecoration() will still not delete dead decorations it
created, but I do not think it is its job either; given the following input
```
OpCapability Shader
OpCapability Linkage
OpMemoryModel Logical GLSL450
OpDecorate %2 Restrict
%2 = OpDecorationGroup
OpGroupDecorate %2 %1 %3
OpDecorate %4 Invariant
%4 = OpDecorationGroup
OpGroupDecorate %4 %2
%uint = OpTypeInt 32 0
%1 = OpVariable %uint Uniform
%3 = OpVariable %uint Uniform
```
which of the following two outputs would you expect RemoveDecoration(2) to produce:
```
OpCapability Shader
OpCapability Linkage
OpMemoryModel Logical GLSL450
%uint = OpTypeInt 32 0
%1 = OpVariable %uint Uniform
%3 = OpVariable %uint Uniform
```
or
```
OpCapability Shader
OpCapability Linkage
OpMemoryModel Logical GLSL450
OpDecorate %4 Invariant
%4 = OpDecorationGroup
%uint = OpTypeInt 32 0
%1 = OpVariable %uint Uniform
%3 = OpVariable %uint Uniform
```
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/924
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1174
The default target is SPIR-V 1.3.
For example, spirv-as will generate a SPIR-V 1.3 binary by default.
Use command line option "--target-env spv1.0" if you want to make a SPIR-V
1.0 binary or validate against SPIR-V 1.0 rules.
Example:
# Generate a SPIR-V 1.0 binary instead of SPIR-V 1.3
spirv-as --target-env spv1.0 a.spvasm -o a.spv
spirv-as --target-env vulkan1.0 a.spvasm -o a.spv
# Validate as SPIR-V 1.0.
spirv-val --target-env spv1.0 a.spv
# Validate as Vulkan 1.0
spirv-val --target-env vulkan1.0 a.spv