Eliminate unused store to variable if followed by store to same
variable in same block.
Most significantly, this cleans up stores that this pass itself makes
unused. These useless stores can inhibit subsequent optimizations,
specifically LocalSingleStoreElim, so eliminating them makes those later
optimizations more effective.
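
For reference, a minimal sketch of running this pass together with the
pass it is meant to help, through the public C++ optimizer interface
(error handling omitted; assumes the pass-creation helpers in
spirv-tools/optimizer.hpp):

  #include <cstdint>
  #include <vector>
  #include "spirv-tools/optimizer.hpp"

  // Register the single-block pass ahead of local-single-store-elim, the
  // pass it is meant to make more effective.
  std::vector<uint32_t> RunLocalElim(const std::vector<uint32_t>& binary) {
    spvtools::Optimizer opt(SPV_ENV_UNIVERSAL_1_3);
    opt.RegisterPass(spvtools::CreateLocalSingleBlockLoadStoreElimPass());
    opt.RegisterPass(spvtools::CreateLocalSingleStoreElimPass());
    std::vector<uint32_t> optimized;
    opt.Run(binary.data(), binary.size(), &optimized);  // error handling omitted
    return optimized;
  }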
The main effect of this pass is to simplify the work done by the SSA
rewriter. It catches many local loads and stores early, which speeds up
the work done by the main rewriter.
Introduce a pass that does a DCE-style analysis on individual vector
elements instead of treating the whole vector as a single value.
It then rewrites instructions whose values are not used with something
else. For example, an instruction whose value is not used, even though it
is referenced, is replaced with an OpUndef.
For each function, the analysis determines which SSA registers are live
at the beginning of each basic block and which ones are killed at
the end of the basic block.
It also includes utilities to simulate the register pressure for loop
fusion and fission.
The implementation is based on the paper "A Non-iterative Data-flow
Algorithm for Computing Liveness Sets in Strict SSA Programs" by
Boissinot et al.
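
For illustration only, the sets being computed can be described with the
classic iterative data-flow equations below; this is a sketch, not the
non-iterative algorithm from the paper, and all names are hypothetical:

  #include <cstdint>
  #include <set>
  #include <vector>

  struct Block {
    std::set<uint32_t> use;    // ids read before being defined in this block
    std::set<uint32_t> def;    // ids defined (killed) in this block
    std::vector<size_t> succ;  // indices of successor blocks
  };

  // live_in = use U (live_out - def); live_out = union of successors' live_in.
  void ComputeLiveness(const std::vector<Block>& blocks,
                       std::vector<std::set<uint32_t>>* live_in,
                       std::vector<std::set<uint32_t>>* live_out) {
    live_in->assign(blocks.size(), {});
    live_out->assign(blocks.size(), {});
    bool changed = true;
    while (changed) {
      changed = false;
      for (size_t b = blocks.size(); b-- > 0;) {
        std::set<uint32_t> out;
        for (size_t s : blocks[b].succ)
          out.insert((*live_in)[s].begin(), (*live_in)[s].end());
        std::set<uint32_t> in = blocks[b].use;
        for (uint32_t id : out)
          if (!blocks[b].def.count(id)) in.insert(id);
        if (in != (*live_in)[b] || out != (*live_out)[b]) {
          (*live_in)[b] = std::move(in);
          (*live_out)[b] = std::move(out);
          changed = true;
        }
      }
    }
  }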
* Adds new pass for validating non-uniform group instructions
* Currently only checks execution scope for Vulkan 1.1 and SPIR-V 1.3
* Added test framework
The local-single-store-elim algorithm is not fundamentally bad.
However, when there are a large number of variables, some of the
maps that are used can become very large. These large data structures
then take a very long time to be destroyed. I've seen cases where this
takes around 40% of the time.
I've rewritten that algorithm to not use as much memory. This gives a
significant improvement when running a large number of shaders through
DXC.
I've also made a small change to local-single-block-elim to delete the
loads that it has replaced. That way local-single-store-elim will not
have to look at them. local-single-store-elim now does the same thing.
The time for one set goes from 309s down to 126s. For another set, the
time goes from 102s down to 88s.
GCD MIV test as described in Chapter 3 of "Optimizing Compilers for
Modern Architectures: A Dependence-Based Approach" by Randy Allen and
Ken Kennedy.
Delta test as described in Figure 3 of "Practical Dependence Testing" by
Gina Goff, Ken Kennedy, and Chau-Wen Tseng from PLDI '91.
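
As a rough sketch of the arithmetic behind the GCD test (single index
variable case; the function name is hypothetical, and the real MIV test
generalizes this to several induction variables):

  #include <cstdlib>
  #include <iostream>
  #include <numeric>

  // For accesses A[a*i + c1] and A[b*i + c2], an integer solution to
  // a*i1 + c1 == b*i2 + c2 can exist only if gcd(a, b) divides (c2 - c1).
  bool GcdTestMayDepend(int a, int b, int c1, int c2) {
    const int g = std::gcd(std::abs(a), std::abs(b));
    if (g == 0) return c1 == c2;   // both subscripts are constants
    return (c2 - c1) % g == 0;     // otherwise gcd must divide the difference
  }

  int main() {
    // A[2*i] vs A[2*i + 1]: gcd(2, 2) = 2 does not divide 1 -> independent.
    std::cout << GcdTestMayDepend(2, 2, 0, 1) << "\n";  // prints 0
    // A[4*i] vs A[2*i + 2]: gcd(4, 2) = 2 divides 2 -> dependence possible.
    std::cout << GcdTestMayDepend(4, 2, 0, 2) << "\n";  // prints 1
    return 0;
  }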
* Reworked how execution model limitations are checked
* Now OpFunction checks which entry points call it and checks their
registered limitations, instead of building a call stack from the entry
point
* New tests
* Moving function to entry point mapping into VState
Relaxes checks for per-vertex builtin variables. If the builtin
decoration is applied to a variable, those checks now allow a level
of arraying on the variable before checking type consistency.
* Allows arrays of variables to be present for the per-vertex variables:
* Position
* PointSize
* ClipDistance
* CullDistance
* Updated tests
Add a test for the case where OpBranch branches to a value (a function
value). Previous tests only checked a label value (the name of a block).
Update validate_id.cpp to remove the TODO for OpBranch and note that this
is already checked in validate_cfg.cpp.
The unordered_set in ADCE that holds all of the live instructions takes
a very long time to be destroyed. In some shaders, it takes over 40% of
the time.
If we look at the unique ids of the live instructions, I believe they
are dense enough to make a simple bit vector a good choice to hold that
data. When I check the density of the bit vector for larger shaders, we
are usually using less than 4 bytes per element in the vector, and
almost always less than 16.
So, in this commit, I introduce a simple bit vector class (sketched
below) and use it in ADCE.
This helps improve the compile time for some shaders on Windows by the
40% mentioned above.
Contributes to https://github.com/KhronosGroup/SPIRV-Tools/issues/1328.
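
A minimal sketch of that kind of bit vector (illustrative, not the exact
class added in this commit): one bit per id, stored in 64-bit words.

  #include <cstdint>
  #include <vector>

  class SimpleBitVector {
   public:
    // Marks |id| as live. Returns true if the bit was not already set.
    bool Set(uint32_t id) {
      const uint32_t word = id / 64, bit = id % 64;
      if (word >= words_.size()) words_.resize(word + 1, 0);
      const uint64_t mask = uint64_t(1) << bit;
      const bool was_set = (words_[word] & mask) != 0;
      words_[word] |= mask;
      return !was_set;
    }

    bool Get(uint32_t id) const {
      const uint32_t word = id / 64;
      return word < words_.size() && ((words_[word] >> (id % 64)) & 1) != 0;
    }

   private:
    std::vector<uint64_t> words_;
  };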
For each function, the pass walks the loops from innermost to outermost
and tries to peel loops for which a certain number of iterations can be
executed before or after the loop.
To limit code growth, peeling will not happen if the growth in code size
goes above a configurable threshold.
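
A hand-written illustration of the "peel before" idea on C-like code
(not output of the pass):

  // Before: the first two iterations take a different path through the body.
  void Original(float* out, int n) {
    for (int i = 0; i < n; ++i) {
      out[i] = (i < 2) ? 0.0f : 1.0f;
    }
  }

  // After "peeling before": the two special iterations run ahead of the
  // loop, and the remaining loop body no longer needs the condition on i.
  void Peeled(float* out, int n) {
    for (int i = 0; i < 2 && i < n; ++i) out[i] = 0.0f;
    for (int i = 2; i < n; ++i) out[i] = 1.0f;
  }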
Provides functionality to perform ZIV and SIV dependency analysis tests
between a load and store within the same loop.
Dependency tests rely on scalar analysis to prove and disprove dependencies
with regard to the loop being analysed.
Based on the paper "Practical Dependence Testing" by Goff, Kennedy, and
Tseng (PLDI '91).
Adds support for marking loops in the loop nest as IRRELEVANT.
Loops are marked IRRELEVANT if the analysed instructions contain
no induction variables for those loops, i.e. the loop's induction
variable is not relevant to the dependence between the store and load.
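
A rough sketch of the arithmetic behind the ZIV and strong-SIV cases
(names are hypothetical; the real tests work on the loop's SSA form and
handle more cases):

  #include <cstdlib>
  #include <optional>

  // For a write to A[a*i + c1] and a read of A[a*i + c2] in a loop over
  // [lower, upper]:
  //  - ZIV (a == 0): both subscripts are loop invariant, so there is a
  //    dependence exactly when c1 == c2.
  //  - strong SIV (a != 0): the dependence distance is d = (c1 - c2) / a;
  //    a dependence exists only if d is an integer within the trip range.
  std::optional<int> StrongSivDistance(int a, int c1, int c2,
                                       int lower, int upper) {
    if (a == 0) {                       // ZIV case
      if (c1 == c2) return 0;           // loop-independent dependence
      return std::nullopt;              // proven independent
    }
    if ((c1 - c2) % a != 0) return std::nullopt;   // distance not an integer
    const int d = (c1 - c2) / a;
    if (std::abs(d) > upper - lower) return std::nullopt;  // outside bounds
    return d;                           // dependence with constant distance d
  }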
Adding three rules to fold OpDot (implemented as two).
- When an OpDot has two constant inputs, fold to the resulting constant.
- When one of the inputs is the zero vector, fold to zero.
- When one of the inputs is a vector with a single 1 and 0s elsewhere,
rewrite to an OpCompositeExtract of the corresponding element (see the
sketch below). This will help find even more folding opportunities.
Contributes to #709.
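
For example, the third rule corresponds to this scalar identity
(hand-written illustration, not the pass's code):

  #include <array>
  #include <cassert>

  // A dot product against a one-hot constant vector just selects one
  // element, which is what the rewrite to OpCompositeExtract expresses.
  float Dot3(const std::array<float, 3>& a, const std::array<float, 3>& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  }

  int main() {
    const std::array<float, 3> v = {2.0f, 5.0f, 7.0f};
    const std::array<float, 3> e1 = {0.0f, 1.0f, 0.0f};  // single 1 among 0s
    assert(Dot3(v, e1) == v[1]);  // equivalent to extracting element 1 of v
    return 0;
  }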
According to Vulkan spec 1.1.72:
> The PrimitiveId decoration must be used only within fragment,
> tessellation control, tessellation evaluation, and geometry shaders.
> In a tessellation control or tessellation evaluation shader, any
> variable decorated with PrimitiveId must be declared using the Input
> storage class.
Previously we were enforcing that PrimitiveId can only be used with the
Output storage class in TCS and TES.
From the test case, the slice of the CFG that is interesting for the bug
is
25
|
v
30
|
v
31<-+
| |
v |
34--+
1. In block 25, we have a Phi candidate for %f with arguments
%47 = Phi[%float_0, %0]. This merges %float_0 and a yet unknown
argument from the external loop backedge.
2. We are now processing block 34:
i. The load %35 = OpLoad %f triggers a Phi candidate to be placed in
block 31.
ii. The Phi candidate %50 = Phi needs two arguments. The one coming
from block 30 is %47. But the search for the one coming from block 34
(which we are now processing and have marked sealed) finds %50 itself
as the reaching def for %f.
3. This wrongly marks %50 as a copy-of Phi, which ultimately makes
both %47 and %50 copy-of Phis that get eliminated.
Migrating to the unified grammar means we sometimes have two fields
for a certain feature: version and extensions. It means the feature
in question can be used either in SPIR-V of an advanced-enough
version or in any SPIR-V with the specified extensions.
The validator now respects the above rules.
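
A hypothetical sketch of that rule; the names below are illustrative and
not the validator's actual data structures:

  #include <cstdint>
  #include <set>
  #include <string>

  // A feature is allowed either when the module's SPIR-V version is at
  // least the feature's minimal version, or when any one of the feature's
  // enabling extensions is declared, regardless of version. A min_version
  // of 0 stands for "no core version", i.e. extension-only.
  bool FeatureAllowed(uint32_t module_version, uint32_t min_version,
                      const std::set<std::string>& declared_extensions,
                      const std::set<std::string>& enabling_extensions) {
    if (min_version != 0 && module_version >= min_version) return true;
    for (const std::string& ext : enabling_extensions) {
      if (declared_extensions.count(ext)) return true;
    }
    return false;
  }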
At every definition of a builtin id, run at-reference-check rules on the
defining instruction as well.
Previously the validation missed the case where an invalid storage class
was declared in the instruction which defines the built-in, and not in
the instruction which references the built-in.
Refactored validate built-ins to make
GetExecutionModels(entry_point)
and
GetExecutionModes(entry_point)
available in validation state.
Entry points are allowed to have multiple execution modes and execution
models.
Finished the last missing feature in Vulkan built-ins validation:
FragDepth requires DepthReplacing.
Currently OpImageTexelPointer operations are treated like a use of the
pointer, but the pass does not look at the memory being referenced to
make sure stores to it are not removed.
This change teaches it to identify the memory being accessed, and to
treat that memory as if it is loaded.
Fixes #1445.
OpImageTexelPointer acts like a special kind of load. It is not an
array load, but it also cannot be removed the same way a regular
load can. The type of propagation that needs to be done is similar
to what we do for arrays, so I want to merge that code into that
optimization.
Contributes to #1445.
OpImageTexelPointer acts like a special kind of load. It is still
safe to change the storage class of a variable used in an
OpImageTexelPointer instruction.
Contributes to #1445.
cppreference.com has this description of digits10:
“The value of std::numeric_limits<T>::digits10 is the number of
base-10 digits that can be represented by the type T without change,
that is, any number with this many significant decimal digits can be
converted to a value of type T and back to decimal form, without
change due to rounding or overflow.”
This means that any number with this many digits can be represented
accurately in the corresponding type. A change in any digit after that
point may or may not result in a different bitwise representation.
Therefore this isn't necessarily enough precision to represent the
value accurately in text. Instead we need max_digits10, which has the
following description:
“The value of std::numeric_limits<T>::max_digits10 is the number of
base-10 digits that are necessary to uniquely represent all distinct
values of the type T, such as necessary for
serialization/deserialization to text.”
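
For illustration, a standalone check (assumed, not the actual
hex_float_test case) using a float one ULP below 1.0:

  #include <cmath>
  #include <iostream>
  #include <limits>
  #include <sstream>

  // Print |value| with |digits| significant digits, read it back, and
  // report whether the round trip preserved the exact value.
  static bool RoundTrips(float value, int digits) {
    std::ostringstream out;
    out.precision(digits);
    out << value;
    float back = 0.0f;
    std::istringstream(out.str()) >> back;
    return back == value;
  }

  int main() {
    // Largest float strictly below 1.0f: with 6 significant digits it
    // prints as "1", so it does not survive a text round trip.
    const float value = std::nextafter(1.0f, 0.0f);
    std::cout << RoundTrips(value, std::numeric_limits<float>::digits10)      // 0
              << RoundTrips(value, std::numeric_limits<float>::max_digits10)  // 1
              << "\n";
    return 0;
  }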
The patch includes a test case in hex_float_test which tries to do a
round-trip conversion of a number that requires more than 6 decimal
digits to be accurately represented. This would fail without the
patch.
Sadly this also breaks a bunch of other tests. Some of the tests in
hex_float_test use ldexp and then compare the result with a value which
is not the same as the one returned by ldexp, but instead is that value
rounded to 6 decimals. Others use values that are not exactly
representable as a binary fraction but happened to produce the same
value when rounded to 6 decimals. Where the actual value didn't seem
to matter, these have been changed to different values that can be
represented exactly as a binary fraction.
When building C code with gcc and the
-Wstrict-prototypes option, function declarations
and definitions that don't specify their argument
types generate warnings. Functions that don't
take parameters need to specify (void) as their
parameter list, rather than leaving it empty.
Note this only applies to C, so only the functions
exported in C-compatible headers need fixing. In
C++ an empty parameter list already means the
function takes no parameters, so C++ code can
safely leave it empty.
Previously we used symbols in spv_target_env as the minimum version
requirements for features. That made the version check implicitly
rely on the order of entries in the spv_target_env enum, which
also contains client APIs. Instead, we should use the standard
scheme for constructing SPIR-V version words; by doing that we can
also map client API entries to universal SPIR-V versions.
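
A sketch of that standard scheme (the helper below is illustrative, not
necessarily the exact macro in the code): the SPIR-V version word packs
major and minor into 0x00MMmm00, so versions compare numerically.

  #include <cstdint>

  constexpr uint32_t SpirvVersionWord(uint32_t major, uint32_t minor) {
    return (major << 16) | (minor << 8);
  }

  static_assert(SpirvVersionWord(1, 3) == 0x00010300, "SPIR-V 1.3");
  static_assert(SpirvVersionWord(1, 3) > SpirvVersionWord(1, 1),
                "version words compare numerically");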
When the original code copies an entire array or struct one element at a
time, this turns into a series of OpCompositeInsert instructions followed
by a store of the whole array. We currently miss opportunities in copy
propagate arrays because we do not recognize this as a copy.
This commit adds code to copy propagate arrays to identify this code
pattern.
Also updates the performance passes to run array copy propagation.
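
A hypothetical sketch of the recognition idea (the struct and function
below are illustrative, not the pass's actual code): a chain of inserts
where element k is taken from element k of the same source, covering
every element, amounts to a whole-object copy.

  #include <cstdint>
  #include <vector>

  // One OpCompositeInsert in the chain: writes element |dst_index| of the
  // composite with a value taken (via OpCompositeExtract) from element
  // |src_index| of the source object.
  struct Insert {
    uint32_t dst_index;
    uint32_t src_index;
  };

  // The chain builds a full copy of the source when every element is
  // written from the matching element of the same source.
  bool ChainIsFullCopy(const std::vector<Insert>& chain,
                       uint32_t num_elements) {
    std::vector<bool> covered(num_elements, false);
    for (const Insert& ins : chain) {
      if (ins.dst_index >= num_elements || ins.dst_index != ins.src_index)
        return false;
      covered[ins.dst_index] = true;
    }
    for (bool c : covered)
      if (!c) return false;  // some element is never written
    return true;
  }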
The first implementation of MemoryObject, which is used in copy
propagate arrays, forced the access chain to be like the access chains
in OpCompositeExtract. This excluded the possibility of the memory
object representing an array element that was extracted with a
variable index. Looking at the code, that restriction is not
necessary. I also see some opportunities for applying this in some real
shaders.
Contributes to #1430.
This patch adds support for the analysis of scalars in loops. It works
by traversing the def-use chain to build a DAG of scalar operations and
then simplifies the DAG by folding constants and grouping like terms.
It represents induction variables as recurrent expressions with respect
to a given loop and can simplify DAGs containing recurrent expressions
by rewriting the entire DAG to be a recurrent expression with respect
to the same loop.
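
A sketch of the recurrent-expression idea (chains-of-recurrences style;
the types below are illustrative, not the pass's actual nodes):

  #include <cstdint>
  #include <iostream>

  // An induction variable with respect to a loop, written {offset, +, coeff}:
  // its value on iteration n is offset + coeff * n.
  struct Recurrence {
    int64_t offset;
    int64_t coefficient;
    int64_t Evaluate(int64_t iteration) const {
      return offset + coefficient * iteration;
    }
  };

  // Grouping like terms keeps sums in the same closed form:
  // {a, +, b} + {c, +, d} (same loop) == {a + c, +, b + d}.
  Recurrence Add(const Recurrence& lhs, const Recurrence& rhs) {
    return {lhs.offset + rhs.offset, lhs.coefficient + rhs.coefficient};
  }

  int main() {
    Recurrence i{0, 1};       // for (i = 0; ...; ++i)
    Recurrence addr{16, 4};   // 16 + 4 * i
    std::cout << Add(i, addr).Evaluate(3) << "\n";  // (0+16) + (1+4)*3 = 31
    return 0;
  }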