Track live scalars in VDCE as if they were single element vectors.
Handle the extended instructions for GLSL in VDCE.
Handle composite construct instructions in VDCE.
If one of the operands to an OpVectorTimesScalar instruction is zero,
then the result will be the 0 vector. Currently we do not fold the
instruction unless both operands are constants. This change fixes that.
We also allow folding of OpPhi instructions where the incoming values
are either an OpUndef or the OpPhi instruction itself. As with other
cases, this can be simplified to the OpUndef.
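For illustration, here is a minimal standalone sketch of the zero-operand check for the OpVectorTimesScalar change above. The Operand struct and function name are hypothetical stand-ins, not the actual SPIRV-Tools folding-rule interface.
```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical stand-in for a (possibly constant) operand of
// OpVectorTimesScalar: one value for a scalar, n values for a vector.
struct Operand {
  bool is_constant;
  std::vector<double> values;
};

// Returns a constant zero vector of |size| elements when either operand is a
// constant zero; otherwise returns nothing so the regular folding path runs.
std::optional<std::vector<double>> FoldVectorTimesScalarWithZero(
    const Operand& vector, const Operand& scalar, uint32_t size) {
  auto is_zero = [](const Operand& op) {
    if (!op.is_constant) return false;
    for (double v : op.values) {
      if (v != 0.0) return false;
    }
    return true;
  };
  if (is_zero(vector) || is_zero(scalar)) {
    return std::vector<double>(size, 0.0);  // the result is the 0 vector
  }
  return std::nullopt;
}
```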
Fixes #1511.
Eliminate an unused store to a variable if it is followed by another
store to the same variable in the same block.
Most significantly, this cleans up stores made unused by this pass.
These useless stores can inhibit subsequent optimizations, specifically
LocalSingleStoreElim. Eliminating them makes subsequent optimization more
effective.
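A rough sketch of the same-block scan, using a toy instruction model rather than the pass's real IR (the enum, struct, and helper below are purely illustrative):
```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Toy model: a store to a variable is dead if the same block stores to that
// variable again before any load of it.
enum class Op { kLoad, kStore, kOther };
struct Inst {
  Op op;
  uint32_t var;
  bool dead = false;
};

void MarkDeadStores(std::vector<Inst>& block) {
  std::unordered_map<uint32_t, Inst*> pending_store;  // var id -> last store
  for (Inst& inst : block) {
    if (inst.op == Op::kStore) {
      auto it = pending_store.find(inst.var);
      if (it != pending_store.end()) it->second->dead = true;  // overwritten
      pending_store[inst.var] = &inst;
    } else if (inst.op == Op::kLoad) {
      pending_store.erase(inst.var);  // the pending store is now observed
    }
  }
}
```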
The main effect of this pass is to simplify the work done by the SSA
rewriter. It catches many local loads/stores, which helps speed up the
work done by the main rewriter.
Introduce a pass that does a DCE-type analysis on individual vector
elements instead of treating the whole vector as a single element.
It then rewrites instructions whose results are not used with something
else. For example, an instruction whose value is not used, even though it
is referenced, is replaced with an OpUndef.
For each function, the analysis determines which SSA registers are live
at the beginning of each basic block and which ones are killed at
the end of the basic block.
It also includes utilities to simulate the register pressure for loop
fusion and fission.
The implementation is based on the paper "A non-iterative data-flow
algorithm for computing liveness sets in strict SSA programs" from
Boissinot et al.
* Adds new pass for validating non-uniform group instructions
* Currently only checks execution scope for Vulkan 1.1 and SPIR-V 1.3
* Added test framework
The local-single-store-elim algorithm is not fundamentally bad.
However, when there are a large number of variables, some of the
maps that are used can become very large. These large data structures
then take a very long time to be destroyed. I've seen cases around 40%
of the time.
I've rewritten that algorithm to not use as much memory. This gives a
significant improvement when running a large number of shaders through
DXC.
I've also made a small change to local-single-block-elim to delete the
loads that it has replaced. That way local-single-store-elim will not
have to look at those. local-single-store-elim now does the same thing.
The time for one set goes from 309s down to 126s. For another set, the
time goes from 102s down to 88s.
GCD MIV test as described in Chapter 3 of "Optimizing Compilers for
Modern Architectures: A Dependence-Based Approach" by Randy Allen, and
Ken Kennedy.
Delta test as described in Figure 3 of "Practical Dependence Testing" by
Gina Goff, Ken Kennedy, and Chau-Wen Tseng from PLDI '91.
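For reference, the core of the GCD test reduces to an integer divisibility check. This is a standalone sketch under that reading, not the pass's actual interface:
```cpp
#include <cstdlib>
#include <numeric>
#include <vector>

// A solution to a1*x1 + ... + an*xn = c exists over the integers only if
// gcd(a1, ..., an) divides c. Returning false disproves the dependence.
bool GcdTestMayDepend(const std::vector<long>& coefficients, long constant) {
  long g = 0;
  for (long a : coefficients) g = std::gcd(g, std::abs(a));
  if (g == 0) return constant == 0;  // all coefficients are zero
  return constant % g == 0;          // otherwise the gcd must divide c
}
```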
* Reworked how execution model limitations are checked
* Now OpFunction checks which entry points call it and checks its
registered limitations instead of building a call stack in the entry
point
* New tests
* Moving function to entry point mapping into VState
Relaxes checks for per-vertex builtin variables. If the builtin
decoration is applied to a variable, then those checks now allow a level
of arraying on the variable before checking the type consistency.
* Allows arrays of variables to be present for the per-vertex variables:
* Position
* PointSize
* ClipDistance
* CullDistance
* Updated tests
Add test for case where OpBranch branches to a value (a function value).
Previous tests only checked a label value (the name of a block).
Update validate_id.cpp to remove the TODO for OpBranch and say that it
is already checked in validate_cfg.cpp.
The unordered_set in ADCE that holds all of the live instructions takes
a very long time to be destroyed. In some shaders, it takes over 40% of
the time.
If we look at the unique ids of the live instructions, I believe they
are dense enough to make a simple bit vector a good choice to hold that
data. When I check the density of the bit vector for larger shaders, we
are usually using less than 4 bytes per element in the vector, and
almost always less than 16.
So, in this commit, I introduce a simple bit vector class, and
use it in ADCE.
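A minimal sketch of such a bit vector keyed by instruction unique id (not the actual class added in this commit):
```cpp
#include <cstdint>
#include <vector>

class BitVector {
 public:
  // Marks |id| as set, growing the storage on demand. Returns true if the
  // bit was not already set.
  bool Set(uint32_t id) {
    uint32_t word = id / 64;
    uint32_t bit = id % 64;
    if (word >= words_.size()) words_.resize(word + 1, 0);
    uint64_t mask = uint64_t(1) << bit;
    bool was_set = (words_[word] & mask) != 0;
    words_[word] |= mask;
    return !was_set;
  }

  bool Get(uint32_t id) const {
    uint32_t word = id / 64;
    return word < words_.size() && ((words_[word] >> (id % 64)) & 1) != 0;
  }

 private:
  std::vector<uint64_t> words_;  // roughly 1 bit per possible unique id
};
```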
This helps improve the compile time for some shaders on Windows by the
40% mentioned above.
Contributes to https://github.com/KhronosGroup/SPIRV-Tools/issues/1328.
For each loop in a function, the pass walks the loops from innermost to outermost
and tries to peel loops for which a certain number of iterations can be done before or after the loop.
To limit code growth, peeling will not happen if the growth in code size goes above a configurable threshold.
Provides functionality to perform ZIV and SIV dependency analysis tests
between a load and store within the same loop.
Dependency tests rely on scalar analysis to prove and disprove dependencies
with regard to the loop being analysed.
Based on the 1991 paper Practical Dependence Testing by Goff, Kennedy, Tseng
Adds support for marking loops in the loop nest as IRRELEVANT.
Loops are marked IRRELEVANT if the analysed instructions contain
no induction variables for the loops, i.e. the loop's induction
variable is not relevant to the dependence of the store and load.
Adding three rules to fold OpDot (implemented as two).
- When an OpDot has two constants, then fold to the resulting const.
- When one of the inputs is the 0 vector, then fold to zero.
- When one of the inputs is a single 1 with 0s, then rewrite to an
OpCompositeExtract of the appropriate element. This will help find
even more folding opportunities.
Contributes to #709.
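To illustrate the third rule, a helper like the hypothetical one below (plain C++, not the folding-rule API) can detect a unit basis vector; when it returns index i, dot(x, e_i) reduces to an OpCompositeExtract of element i of the other input.
```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Returns the index of the single 1.0 element if |v| is a unit basis vector
// (exactly one 1.0 and 0.0 everywhere else), and nothing otherwise.
std::optional<size_t> BasisVectorIndex(const std::vector<double>& v) {
  std::optional<size_t> index;
  for (size_t i = 0; i < v.size(); ++i) {
    if (v[i] == 1.0) {
      if (index) return std::nullopt;  // more than one 1.0 element
      index = i;
    } else if (v[i] != 0.0) {
      return std::nullopt;  // some other non-zero element
    }
  }
  return index;
}
```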
According to Vulkan spec 1.1.72:
> The PrimitiveId decoration must be used only within fragment,
> tessellation control, tessellation evaluation, and geometry shaders.
> In a tessellation control or tessellation evaluation shader, any
> variable decorated with PrimitiveId must be declared using the Input
> storage class.
We were enforcing that PrimitiveId can only be used with Output
storage class for TCS and TES before.
From the test case, the slice of the CFG that is interesting for the bug
is
25
|
v
30
|
v
31<-+
| |
v |
34--+
1. In block 25, we have a Phi candidate for %f with arguments
%47 = Phi[%float_0, %0]. This merges %float_0 and a yet unknown
argument from the external loop backedge.
2. We are now processing block 34:
i. The load %35 = OpLoad %f triggers a Phi candidate to be placed in
block 31.
ii. The Phi candidate %50 = Phi needs two arguments. The one coming
from block 30 is %47. But the one coming from block 34 (which we
are now processing and have marked sealed), finds %50 itself as
the reaching def for %f.
3. This wrongfully marks %50 as a copy-of Phi, which ultimately makes
both %47 and %50 copy-of Phis that get eliminated.
Update grammar table generation:
- Get extensions from instructions, not just operand-kinds
- Don't explicitly list extensions that come from the SPIR-V core
grammar or from a KHR extended instruction set grammar.
This makes it easier to support new extensions since the recommended
extension strategy is to add instructions to the core grammar file.
Also, test that the validator has trivial support for passing through
the extensions SPV_NV_shader_subgroup_partitioned and
SPV_EXT_descriptor_indexing.
Migrating to unified grammar means we sometimes have two fields
for a certain feature: version and extensions. It means the feature
in question can be used either in SPIR-V of advanced-enough
versions or in any SPIR-V with the specified extensions.
Validator now respects the above rules.
At every definition of a builtin id, run at-reference-check rules on the
defining instruction as well.
Previously the validation was missing the case when an invalid storage class
was used in the instruction which defines the built-in, and not in
the instruction which references the built-in.
Refactored validate built-ins to make
GetExecutionModels(entry_point)
and
GetExecutionModes(entry_point)
available in validation state.
Entry points are allowed to have multiple execution modes and execution
models.
Finished the last missing feature in Vulkan built-ins validation:
FragDepth requires DepthReplacing.
Currently OpImageTexelPointer operations are treated like a use of the
pointer, but the pass does not look at the memory being referenced to
make sure stores are not
removed.
This change teaches it to identify the memory being accessed, and
treats that memory as if it were loaded.
Fixes #1445.
OpImageTexelPointer acts like a special kind of load. It is not an
array load, but it also cannot be removed the same way a regular
load can. The type of propagation that needs to be done is similar
to what we do for arrays, so I want to merge that code into that
optimization.
Contributes to #1445.
OpImageTexelPointer acts like a special kind of load. It is still
safe to change the storage class of a variable used in an
OpImageTexelPointer instruction.
Contributes to #1445.
cppreference.com has this description of digits10:
“The value of std::numeric_limits<T>::digits10 is the number of
base-10 digits that can be represented by the type T without change,
that is, any number with this many significant decimal digits can be
converted to a value of type T and back to decimal form, without
change due to rounding or overflow.”
This means that any number with this many digits can be represented
accurately in the corresponding type. A change in any digit in a
number after that may or may not result in a different bitwise
representation. Therefore this isn’t necessarily enough precision to
accurately represent the value in text. Instead we need max_digits10
which has the following description:
“The value of std::numeric_limits<T>::max_digits10 is the number of
base-10 digits that are necessary to uniquely represent all distinct
values of the type T, such as necessary for
serialization/deserialization to text.”
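A small standalone demonstration of the difference (this assumes nothing about the SPIRV-Tools code itself):
```cpp
#include <cmath>
#include <iostream>
#include <limits>
#include <sstream>
#include <string>

// Prints |value| with |digits| significant decimal digits, then parses it
// back, returning the reconstructed float.
float RoundTrip(float value, int digits) {
  std::ostringstream out;
  out.precision(digits);
  out << value;
  return std::stof(out.str());
}

int main() {
  // The smallest float greater than 1.0f needs max_digits10 (9) significant
  // digits to survive a text round trip; digits10 (6) loses it.
  float value = std::nextafter(1.0f, 2.0f);
  std::cout << (RoundTrip(value, std::numeric_limits<float>::digits10) == value)
            << "\n";  // 0: the value was altered by the round trip
  std::cout << (RoundTrip(value, std::numeric_limits<float>::max_digits10) == value)
            << "\n";  // 1: the round trip is exact
}
```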
The patch includes a test case in hex_float_test which tries to do a
round-trip conversion of a number that requires more than 6 decimal
places to be accurately represented. This would fail without the
patch.
Sadly this also breaks a bunch of other tests. Some of the tests in
hex_float_test use ldexp and then compare it with a value which is not
the same as the one returned by ldexp but instead is the value rounded
to 6 decimals. Others use values that are not exactly representable as
a binary floating-point fraction but happened to generate the same
value when rounded to 6 decimals. Where the actual value didn’t seem
to matter, these have been changed to different values that can be
represented as a binary fraction.
When the original code copies an entire array or struct one element at a
time, this turns into a series of OpCompositeInsert instructions followed
by a store of the whole array. We currently miss opportunities in copy
propagate arrays because we do not recognize this as a copy.
This commit adds code to copy propagate arrays to identify this code
pattern.
Also updates the performance passes to run array copy propagation.
The first implementation of MemoryObject, which is used in copy
propagate arrays, forced the access chain to be like the access chains
in OpCompositeExtract. This excluded the possibility of the memory
object representing an array element that was extracted with a
variable index. Looking at the code, that restriction is not
necessary. I also see some opportunities for doing this in some real
shaders.
Contributes to #1430.
This patch adds support for the analysis of scalars in loops. It works
by traversing the def-use chain to build a DAG of scalar operations and
then simplifies the DAG by folding constants and grouping like terms.
It represents induction variables as recurrent expressions with respect
to a given loop and can simplify DAGs containing recurrent expressions by
rewriting the entire DAG to be a recurrent expression with respect to
the same loop.
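As a rough illustration with toy types (not the analysis's actual classes), an induction variable can be modelled as a recurrence {offset, +, coefficient} with respect to its loop, and like recurrences over the same loop combine term by term:
```cpp
#include <cstdint>

// Value on iteration n is offset + n * coefficient.
struct Recurrence {
  int64_t offset;       // value on the first iteration
  int64_t coefficient;  // amount added by each back edge

  int64_t Evaluate(int64_t iteration) const {
    return offset + iteration * coefficient;
  }
};

// Adding two recurrences over the same loop simply adds the offsets and the
// coefficients, which is the kind of grouping the simplifier performs.
Recurrence Add(const Recurrence& a, const Recurrence& b) {
  return {a.offset + b.offset, a.coefficient + b.coefficient};
}
```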
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1427
Adjusting validation to the new rule:
"Before version 1.3, it is only valid to use this instruction with
TessellationControl, GLCompute, or Kernel execution models.
There is no such restriction starting with version 1.3."
Also fixed wrong version numbers in source/spirv_target_env.cpp.
When we change the type of an object that gets stored, we do not want to
change the type of the memory location being stored to. In order to
still be able to do the rewrite, we will decompose and rebuild the
object so it is the type that can be stored.
Fixes #1416.
The SPIR-V generated from HLSL code contains many copies of very large
arrays. Not only are these time consuming, but they also cause
problems for drivers because they require too much space.
To work around this, we will implement an array copy propagation. Note
that we will not implement a complete array data flow analysis in order
to implement this. We will be looking for very simple cases:
1) The source must never be stored to.
2) The target must be stored to exactly once.
3) The store to the target must be a store to the entire array, and be a
copy of the entire source.
4) All loads of the target must be dominated by the store.
The hard part is keeping all of the types correct. We do not want to
have to do too large a search to update everything, which may not be
possible, so we give up if we see any instruction that might be hard to
update.
Also in types.h, the element decorations are now stored in an std::map.
This change was done so the hashing algorithm for a Struct is
consistent. With the std::unordered_map, the traversal order was
non-deterministic leading to the same type getting hashed to different
values. See |Struct::GetExtraHashWords|.
Contributes to #1416.
Added a framework for validation of BuiltIn variables. The framework
allows implementation of flexible abstract rules which are required for
built-ins as the information (decoration, definition, reference) is not
in one place, but is scattered all over the module.
Validation rules are implemented as a map
id -> list<functor(instruction)>
Ids which are dependent on built-in types or objects receive a task
list, such as "this id cannot be referenced from a function which is
called from an entry point with execution model X; propagate this rule
to your descendants in the global scope".
Also refactored test/val/val_fixtures.
All built-ins covered by tests
This patch adds a new option --time-report to spirv-opt. For each pass
executed by spirv-opt, the flag prints resource utilization for the pass
(CPU time, wall time, RSS and page faults)
This fixes issue #1378
This pass replaces the load/store elimination passes. It implements the
SSA re-writing algorithm proposed in
Simple and Efficient Construction of Static Single Assignment Form.
Braun M., Buchwald S., Hack S., Leißa R., Mallon C., Zwinkau A. (2013)
In: Jhala R., De Bosschere K. (eds)
Compiler Construction. CC 2013.
Lecture Notes in Computer Science, vol 7791.
Springer, Berlin, Heidelberg
https://link.springer.com/chapter/10.1007/978-3-642-37051-9_6
In contrast to common eager algorithms based on dominance and dominance
frontier information, this algorithm works backwards from load operations.
When a target variable is loaded, it queries the variable's reaching
definition. If the reaching definition is unknown at the current location,
it searches backwards in the CFG, inserting Phi instructions at join points
in the CFG along the way until it finds the desired store instruction.
The algorithm avoids repeated lookups using memoization.
For reducible CFGs, which are a superset of the structured CFGs in SPIR-V,
this algorithm is proven to produce minimal SSA. That is, it inserts the
minimal number of Phi instructions required to ensure the SSA property, but
some Phi instructions may be dead
(https://en.wikipedia.org/wiki/Static_single_assignment_form).
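The backwards lookup at the heart of the algorithm can be sketched as follows, with placeholder types and trivially stubbed phi helpers; the real pass works on SPIR-V instructions and also handles sealing of blocks.
```cpp
#include <cstdint>
#include <map>
#include <vector>

struct Block {
  std::vector<Block*> preds;
  std::map<uint32_t, uint32_t> defs;  // variable id -> reaching value id
};

static uint32_t next_id = 100;
uint32_t CreatePhi(Block*) { return next_id++; }           // stub
void AddPhiOperand(uint32_t, uint32_t) {}                   // stub
uint32_t TryRemoveTrivialPhi(uint32_t phi) { return phi; }  // stub

uint32_t ReadVariable(uint32_t var, Block* block) {
  auto it = block->defs.find(var);
  if (it != block->defs.end()) return it->second;  // local definition found

  uint32_t val;
  if (block->preds.size() == 1) {
    // Only one way into this block: no phi is needed, keep searching back.
    val = ReadVariable(var, block->preds[0]);
  } else {
    // Join point: place a phi, record it first to break cycles through
    // loops, then fill in one operand per predecessor and simplify.
    val = CreatePhi(block);
    block->defs[var] = val;
    for (Block* pred : block->preds) {
      AddPhiOperand(val, ReadVariable(var, pred));
    }
    val = TryRemoveTrivialPhi(val);  // keeps the resulting SSA form minimal
  }
  block->defs[var] = val;
  return val;
}
```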
The loop peeler util takes a loop as input and creates a new one before it.
The iterator of the duplicated loop is then set to accommodate the number
of iterations required for the peeling.
The loop peeling pass that decides when to do the peeling, and the
profitability analysis, is left for a follow-up PR.
We are seeing shaders that have multiple returns in a function. These
functions must get inlined for legalization purposes; however, the
inliner does not know how to inline functions that have multiple
returns.
The solution we will go with is to improve the merge return pass to
handle structured control flow.
Note that the merge return pass will assume the CFG has been cleaned up
by dead branch elimination.
Fixes #857.
Previously we kept a separate static grammar table for opcodes/
operands per SPIR-V version. This commit changes that to use a
single unified static grammar table for opcodes/operands.
This essentially changes how grammar facts are queried against
a certain target environment. There is only limited filtering
according to the desired target environment; a symbol is
considered available as long as:
1. The target environment satisfies the minimal requirement of
the symbol; or
2. There is at least one extension enabling this symbol.
Note that the second rule assumes the extension enabling the
symbol is indeed requested in the SPIR-V code; checking that
should be the validator's work.
Also fixed a few grammar related issues:
* Rounding mode capability requirements are moved to client APIs.
* Reserved symbols not available in any extension are no longer
recognized by the assembler.
Strips reflection info. This is limited to decorations and
decoration instructions related to the SPV_GOOGLE_hlsl_functionality1
extension.
It will remove the OpExtension for SPV_GOOGLE_hlsl_functionality1.
It will also remove the OpExtension for SPV_GOOGLE_decorate_string
if there are no further remaining uses of OpDecorateStringGOOGLE.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1398
This reimplementation fixes several issues when removing decorations associated
with an ID (partially addresses #1174 and gives tools for fixing #898), as well
as making it easier to remove groups; a few additional tests have been added.
DecorationManager::RemoveDecoration() will still not delete dead decorations it
created, but I do not think that is its job either; given the following input,
```
OpCapability Shader
OpCapability Linkage
OpMemoryModel Logical GLSL450
OpDecorate %2 Restrict
%2 = OpDecorationGroup
OpGroupDecorate %2 %1 %3
OpDecorate %4 Invariant
%4 = OpDecorationGroup
OpGroupDecorate %4 %2
%uint = OpTypeInt 32 0
%1 = OpVariable %uint Uniform
%3 = OpVariable %uint Uniform
```
which of the following two outputs would you expect RemoveDecoration(2) to produce:
```
OpCapability Shader
OpCapability Linkage
OpMemoryModel Logical GLSL450
%uint = OpTypeInt 32 0
%1 = OpVariable %uint Uniform
%3 = OpVariable %uint Uniform
```
or
```
OpCapability Shader
OpCapability Linkage
OpMemoryModel Logical GLSL450
OpDecorate %4 Invariant
%4 = OpDecorationGroup
%uint = OpTypeInt 32 0
%1 = OpVariable %uint Uniform
%3 = OpVariable %uint Uniform
```
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/924
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1174
The default target is SPIR-V 1.3.
For example, spirv-as will generate a SPIR-V 1.3 binary by default.
Use command line option "--target-env spv1.0" if you want to make a SPIR-V
1.0 binary or validate against SPIR-V 1.0 rules.
Example:
# Generate a SPIR-V 1.0 binary instead of SPIR-V 1.3
spirv-as --target-env spv1.0 a.spvasm -o a.spv
spirv-as --target-env vulkan1.0 a.spvasm -o a.spv
# Validate as SPIR-V 1.0.
spirv-val --target-env spv1.0 a.spv
# Validate as Vulkan 1.0
spirv-val --target-env vulkan1.0 a.spv
When merging types we do not remove other information related to the
types. We simply leave it duplicated, and hope it is removed later.
This is what happens with decorations. They are removed in the next
phase of remove duplicates. However, for OpNames that is not the case.
We end up with two different names for the same id, which does not make
sense.
The solution is to remove the names and decorations for the type being
removed instead of rewriting them to refer to the other type.
Note that it is possible that if the first type does not have a name,
then the types will end up with no name. That is fine because the names
should not have any semantic significance anyway.
This was identified in issue #1372, but this does not fix that issue.
* Also mark function parameters as varying
* Conservatively mark assignment instructions as varying if any input is
varying after attempting to fold
* Added a test to catch this case
As per Vulkan spec, BuiltIn variables can't have Location or Component
decorations. On some drivers, these can lead to the driver crashing when
compiling the shader pipeline; for example, NVidia/AMD desktop drivers:
https://github.com/KhronosGroup/glslang/issues/1182.
This change adds validation and tests to catch this.
* getFloatConstantKind() now handles OpConstantNull
* PerformOperation() now handles OpConstantNull for vectors
* Fixed some instances where we would attempt to merge a division by 0
* added tests
The algorithm used in DCEInst to remove dead code is very slow. It is
fine if you only want to remove a small number of instructions, but, if
you need to remove a large number of instructions, then the algorithm in
ADCE is much faster.
This PR removes the calls to DCEInst in the load-store removal passes
and adds a pass of ADCE afterwards.
I tried a number of different orderings of the optimizations, and I
believe this is the best I could find.
The results I have on 3 sets of shaders are:
Legalization:
Set 1: 5.39 -> 5.01
Set 2: 13.98 -> 8.38
Set 3: 98.00 -> 96.26
Performance passes:
Set 1: 6.90 -> 5.23
Set 2: 10.11 -> 6.62
Set 3: 253.69 -> 253.74
Size reduction passes:
Set 1: 7.16 -> 7.25
Set 2: 17.17 -> 16.81
Set 3: 112.06 -> 107.71
Note that the third set's compile time is large because of the large
number of basic blocks, not so much because of the number of
instructions. That is why we don't see much gain there.
Adding basis of arithmetic merging
* Refactored constant collection in ConstantManager
* New rules:
* consecutive negates
* negate of arithmetic op with a constant
* consecutive muls
* reciprocal of div
* Removed IRContext::CanFoldFloatingPoint
* replaced by Instruction::IsFloatingPointFoldingAllowed
* Fixed some bad tests
* added some header comments
Added PerformIntegerOperation
* minor fixes to constants and tests
* fixed IntMultiplyBy1 to work with 64 bit ints
* added tests for integer mul merging
Adding test for vector integer multiply merging
Adding support for merging integer add and sub through negate
* Added tests
Adding rules to merge mult with preceding divide
* Has a couple tests, but needs more
* Added more comments
Fixed bug in integer division folding
* Will no longer merge through integer division if there would be a
remainder in the division
* Added a bunch more tests
Adding rules to merge divide and multiply through divide
* Improved comments
* Added tests
Adding rules to handle mul or div of a negation
* Added tests
Changes for review
* Early exit if no constants are involved in more functions
* fixed some comments
* removed unused declaration
* clarified some logic
Adding new rules for add and subtract
* Fold adds of adds, subtracts or negates
* Fold subtracts of adds, subtracts or negates
* Added tests
This change makes the IR builder use the type manager to generate
OpTypeInts when creating OpConstants. This avoids dangling references
being stored by the created OpConstants.
It moves all conditional branches and switches whose conditions are loop
invariant and uniform. Before performing the loop unswitch we check that
the loop does not contain any instruction that would prevent it
(barriers, group instructions etc.).
Fixes a bug at the same time. In `UpdateDefUse`, if the definition
already exists, we are not supposed to analyse it again. When you do,
the entries for the definition are deleted, and we don't want that.
The check for this was wrong.
This function now checks for side-effects before adding operand
instructions to the dead instruction work list.
Because this fix puts more pressure on IsCombinatorInstruction() to
be correct, this commit adds all OpConstant* and OpType* instructions
to combinator_ops_ set.
Fixes #1341.
This change implements instruction folding for arithmetic operations
that are redundant, specifically:
x + 0 = 0 + x = x
x - 0 = x
0 - x = -x
x * 0 = 0 * x = 0
x * 1 = 1 * x = x
0 / x = 0
x / 1 = x
mix(a, b, 0) = a
mix(a, b, 1) = b
Cache ExtInst import id in feature manager
This allows us to avoid string lookups during optimization; for now we
just cache GLSL std450 import id but I can imagine caching more sets as
they become utilized by the optimizer.
Add tests for add/sub/mul/div/mix folding
The tests cover scalar float/double cases, and some vector cases.
Since most of the code for floating point folding is shared, the tests
for vector folding are not as exhaustive as scalar.
To test sub->negate folding I had to implement a custom fixture.
I mixed up two cases when folding an OpCompositeExtract that is fed by
an OpCompositeInsert. The specific cases are demonstrated in the new
test. I mixed up the conditions for the cases, and treated one like the
other.
Fixes #1323.
* Now track propagation status and assert on bad statuses
* Added helper methods to access instruction propagation status
* Modified the phi meet operator to properly reflect the paper it is
based on
* Modified SSA edge addition so that all edges are added, but only on
state changes
* Fixed a bug in instruction simulation where interesting conditional
branches would not mark the interesting edge as executed
* Added a test to catch this bug
* Added an ostream operator for SSAPropagator::PropStatus
This change handles all 6 regular comparison types in two variations,
ordered (true if values are ordered *and* comparison is true) and
unordered (true if values are unordered *or* comparison is true).
Ordered comparison matches the default floating-point behavior on host
but we use std::isnan to check ordering explicitly anyway.
This change also slightly reworks the floating-point folding support
code to make it possible to define a folding operation that returns
boolean instead of floating point.
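For reference, the two variants reduce to the following plain C++, mirroring the behavior described above rather than quoting the folder's actual code:
```cpp
#include <cmath>
#include <iostream>

// Ordered: true only if neither input is NaN *and* the comparison holds.
bool FOrdLessThan(double a, double b) {
  return !std::isnan(a) && !std::isnan(b) && a < b;
}

// Unordered: true if either input is NaN *or* the comparison holds.
bool FUnordLessThan(double a, double b) {
  return std::isnan(a) || std::isnan(b) || a < b;
}

int main() {
  double nan = std::nan("");
  std::cout << FOrdLessThan(nan, 1.0) << "\n";    // 0: NaN makes it false
  std::cout << FUnordLessThan(nan, 1.0) << "\n";  // 1: NaN makes it true
}
```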
These tests exhaustively test ordered/unordered comparisons for
float/double.
Since for NaN inputs the comparison result doesn't depend on the
comparison function, we just test == and !=; NaN inputs result in true
unordered comparisons and false ordered comparisons.
In dead branch elimination, we already recognize unreachable continue
blocks, and update OpPhi instructions accordingly. This change adds an
extra check: if the head block has exactly 1 other incoming edge, then
replace the OpPhi with the value from that edge.
Fixes #1314.
unordered_map is not POD. Using it as a static may cause problems
when operator new() and operator delete() are customized.
Also changed some function signatures to use const char* instead
of std::string, which will give caller the flexibility to avoid
creating a std::string.
We can fold OpSelect into one of the operands in two cases:
- condition is constant
- both results are the same
Even if the original shader doesn't have either of these, if-conversion
pass sometimes ends up generating instructions like
%7127 = OpSelect %int %3220 %7058 %7058
And this optimization cleans them up.
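A toy model of the two folds, where plain integers stand in for SPIR-V result ids (this is not the optimizer's actual interface):
```cpp
#include <cstdint>
#include <optional>

// Returns the id the OpSelect folds to, or nothing if it cannot be folded.
// |cond| is only meaningful when |cond_is_constant| is true.
std::optional<uint32_t> FoldSelect(bool cond_is_constant, bool cond,
                                   uint32_t true_id, uint32_t false_id) {
  if (true_id == false_id) return true_id;  // both results are the same
  if (cond_is_constant) return cond ? true_id : false_id;  // constant cond
  return std::nullopt;
}
```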
This patch adds initial support for loop unrolling in the form of a
series of utility classes which perform the unrolling. The pass can
be run with the command spirv-opt --loop-unroll. This will unroll
loops within the module which have the unroll hint set. The unroller
imposes a number of requirements on the loops it can unroll. These are
documented in the comments for the LoopUtils::CanPerformUnroll method in
loop_utils.h. Some of the restrictions will be lifted in future patches.
Implementation of the simplification pass.
- Create pass that calls the instruction folder on each instruction and
propagate instructions that fold to a copy. This will do copy
propagation as well.
- Did not use the propagator engine because I want to modify the instruction
as we go along.
- Change folding to not allocate new instructions, but make changes in
place. This change had a big impact on compile time.
- Add simplification pass to the legalization passes in place of
insert-extract elimination.
- Added test cases for new folding rules.
- Added tests for the simplification pass
- Added a method to the CFG to apply a function to the basic blocks in
reverse post order.
Contributes to #1164.
Add pkg-config file for shared libraries
Properly build SPIRV-Tools DLL
Test C interface with shared library
Set PATH to shared library file for c_interface_shared test
Otherwise, the test won't find SPIRV-Tools-shared.dll.
Do not use private functions when testing with shared library
Make all symbols hidden by default for shared library target
* Added TypeManager::RebuildType
* rebuilds the type and its constituent types in terms of memory owned
by the manager.
* Used by TypeManager::RegisterType to properly allocate memory
* Adding an unit test to expose the issue
* Added some tests to provide coverage of RebuildType
* Added an accessor to the target pointer for a forward pointer
Create the folding engine that will
1) attempt to fold an instruction,
2) iterate on the folding so small folding rules can be easily combined, and
3) insert new instructions when needed.
I've added the minimum number of rules needed to test the features above.
This patch adds a LoopUtils class to handle some loop related transformations. For now it has 2 transformations that simplify other transformations such as loop unroll or unswitch:
- Dedicated exit blocks: this ensures that all exit basic blocks
(out-of-loop basic blocks that have a predecessor in the loop)
have all their predecessors in the loop;
- Loop Closed SSA (LCSSA): this ensures that all definitions in a loop are used inside the loop
or in a phi instruction in an exit basic block.
It also adds the following capabilities:
- Loop::IsLCSSA to test if the loop is in a LCSSA form
- Loop::GetOrCreatePreHeaderBlock that can build a loop preheader if required;
- New methods to allow on the fly updates of the loop descriptors.
- New methods to allow on the fly updates of the CFG analysis.
- Instruction::SetOperand to allow expression of the index relative to Instruction::NumOperands (to be compatible with the index returned by DefUseManager::ForEachUse)
Creates a pass that will remove instructions that are invalid for the
current shader stage. For the instruction to be considered for replacement
1) The opcode must be valid for shader modules.
2) The opcode must be invalid for the current shader stage.
3) All entry points to the module must be for the same shader stage.
4) The function containing the instruction must be reachable from an entry point.
Fixes #1247.
* Had to remove templating from InstructionBuilder as a result
* now preserved analyses are specified as a constructor argument
* updated tests and uses
* changed static_assert to a runtime assert
* this should probably get further changes in the future
* When handling unreachable merges and continues, do not optimize to the
same IR
* pass did not check whether the unreachable blocks were in the
optimized form before transforming them
* added a test to catch this issue
* Should handle all possibilities
* Stricter checks for what is disallowed:
* header and header
* merge and merge
* Allow header and merge blocks to be merged
* Erases the structured control declaration if merging header and
merge blocks together.
* If the dead branch elim is performed on a module without structured
control flow, the OpSelectionMerge may not be present
* Add a check for pointer validity before dereferencing
* Added a test to catch the bug
* Forces traversal of phis if the def has changed to varying
* Mark a phi as varying if all incoming values are varying
* added a test to catch the bug
This adds Dead Insert Elimination to the end of the
--eliminate-insert-extract pass. See the new tests for examples of code
that will benefit.
Essentially, this removes OpCompositeInsert instructions which are not
used, either because there is no instruction which uses the value at the
index it is inserted, or because a subsequent insert intercepts any such
use.
This code has been seen to remove significant amounts of dead code from
real-life HLSL shaders being ported to Vulkan. In fact, it is needed to
remove dead texture samples which cause Vulkan validation layer errors
(unbound textures and samplers) if not removed. Such DCE is thus
required for fxc equivalence and legalization.
This analysis operates across "chains" of Inserts which can also contain
Phi instructions.
* Handles simple cases only
* Identifies phis in blocks with two predecessors and attempts to
convert the phi to a select
* does not perform code motion currently so the converted values must
dominate the join point (e.g. can't be defined in the branches)
* limited for now to two predecessors, but can be extended to handle
more cases
* Adding if conversion to -O and -Os
Ban floating point case for OpAtomicLoad, OpAtomicExchange,
OpAtomicCompareExchange. In graphics (Shader) environments, these
instructions only operate on scalar integers. Ban the floating point
case. OpenCL supports atomic_float.
Implemented Vulkan-specific rules:
- OpTypeImage must declare a scalar 32-bit float or 32-bit integer type
for the “Sampled Type”.
- OpSampledImage must only consume an “Image” operand whose type has its
“Sampled” operand set to 1.
The current folding routines have a very cumbersome interface, which makes
them harder to use, and it is not obvious how to extend them.
This change is to create a new interface for the folding routines, and
show how it can be used by calling it from CCP.
This does not make a significant change to the behaviour of CCP. In
general it should produce the same code as before; however it is
possible that an instruction that takes 32-bit integers as inputs and
the result is not a 32-bit integer or bool will not be folded as before.
It seems like Android has a problem with INT32_MAX and the like. I'll
explicitly define those if they are not already defined.
The class factors out the instruction building process.
Def-use manager analysis can be updated on the fly to maintain coherency.
To be updated to take into account more analyses.
* AddToWorklist can now be called unconditionally
* It will only add instructions that have not already been marked as
live
* Fixes a case where a merge was not added to the worklist because the
branch was already marked as live
* Added two similar tests that fail without the fix
We have come across a driver bug where an OpUnreachable inside a loop
is causing the shader to go into an infinite loop. This commit will try
to avoid this bug by turning OpUnreachable instructions that are
contained in a loop into branches to the loop merge block.
This is not added to "-O" and "-Os" because it should only be used if
the driver being targeted has this problem.
Fixes #1209.
At the moment specialization constants look like constants to ccp. This
causes a problem because they are handled differently by the constant
manager.
I chose to simply skip over them, and not try to add them to the value
table. We can do specialization before ccp if we want to be able to
propagate these values.
Fixes #1199.
The current code expects the users of the constant manager to initialize
it with all of the constants in the module. The problem is that you do
not want to redo the work multiple times. So I decided to move that
code to the constructor of the constant manager. This way it will
always be initialized on first use.
I also removed an assert that expects all constant instructions to be
successfully mapped. This is because not all OpConstant* instructions
can map to a constant, and neither do the OpSpecConstant* instructions.
The real problem is that an OpConstantComposite can contain a member
that is OpUndef. I tried to treat OpUndef like OpConstantNull, but this
failed because an OpSpecConstantComposite with an OpUndef cannot be
changed to an OpConstantComposite. Since I feel this case will not be
common, I decided to not complicate the code.
Fixes #1193.
* Added for Instruction, BasicBlock, Function and Module
* Uses new disassembly functionality that can disassemble individual
instructions
* For debug use only (no caching is done)
* Each output converts module to binary, parses and outputs an
individual instruction
* Added a test for whole module output
* Disabling Microsoft checked iterator warnings
* Updated check_copyright.py to accept 2018
* Changed MemPass::InsertPhiInstructions to set basic blocks for new
phis
* Local SSA elim now maintains instr to block mapping
* Added a test and confirmed it fails without the updated phis
* IRContext::set_instr_block no longer builds the map if the analysis is
invalid
* Added instruction to block mapping verification to
IRContext::IsConsistent()
This improves Extract replacement to continue through VectorShuffle.
It will also handle Mix with 0.0 or 1.0 in the a-value of the desired
component.
To facilitate optimization of VectorShuffle, the algorithm was refactored
to pass around the indices of the extract in a vector rather than pass the
extract instruction itself. This allows the indices to be modified as the
algorithm progresses.
Modified ADCE to remove dead globals.
* Entry point and execution mode instructions are marked as alive
* Reachable functions and their parameters are marked as alive
* Instruction deletion now deferred until the end of the pass
* Eliminated dead insts set, added IsDead to calculate that value
instead
* Ported applicable dead variable elimination tests
* Ported dead constant elim tests
Added dead function elimination to ADCE
* ported dead function elim tests
Added handling of decoration groups in ADCE
* Uses a custom sorter to traverse decorations in a specific order
* Simplifies necessary checks
Updated -O and -Os pass lists.
Pass now paints live blocks and fixes constant branches and switches as
it goes. No longer requires structured control flow. It also removes
unreachable blocks as a side effect. It fixes the IR (phis) before doing
any code removal (other than terminator changes).
Added several unit tests for updated/new functionality.
Does not remove dead edge from a phi node:
* Checks that incoming edges are live in order to retain them
* Added BasicBlock::IsSuccessor
* added test
Fixing phi updates in the presence of extra backedge blocks
* Added tests to catch bug
Reworked how phis are updated
* Instead of creating a new Phi and RAUW'ing the old phi with it, I now
replace the phi operands, but maintain the def/use manager correctly.
For unreachable merge:
* When considering unreachable continue blocks the code now properly
checks whether the incoming edge will continue to be live.
Major refactoring for review
* Broke into 4 major functions
* marking live blocks
* marking structured targets
* fixing phis
* deleting blocks
This fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1143.
When an instruction transitions from constant to bottom (varying) in the
lattice, we were telling the propagator that the instruction was
varying, but never updating the actual value in the values table.
This led to incorrect value substitutions at the end of propagation.
The patch also re-enables CCP in -O and -Os.
In HLSL structured buffer legalization, pointer to pointer types
are emitted to indicate a structured buffer variable should be
treated as an alias of some other variable. We need an option to
relax the check of pointer types in logical addressing mode to
catch other validation errors.
Add post-order tree iterator.
Add DominatorTreeNode extensions:
- Add begin/end methods to do pre-order and post-order tree traversal from a given DominatorTreeNode
Add DominatorTree extensions:
- Add begin/end methods to do pre-order and post-order tree traversal
- Tree traversal ignores the pseudo entry block by default
- Retrieve a DominatorTreeNode from a basic block
Add loop descriptor:
- Add a LoopDescriptor class to register all loops in a given function.
- Add a Loop class to describe a loop:
- Loop parent
- Nested loops
- Loop depth
- Loop header, merge, continue and preheader
- Basic blocks that belong to the loop
Correct a bug that forced the dominator tree to be constantly rebuilt.
Turn `Linker::Link()` into free functions
As very little information was kept in the Linker class, we can get rid
of the whole class and have the `Link()` as free functions instead; the
environment target as well as the consumer are passed along through an
`spv_context` object.
The resulting linked_binary is passed as a pointer rather than a
reference to follow the Google C++ Style guidelines.
Addresses remaining comments from
https://github.com/KhronosGroup/SPIRV-Tools/pull/693 about the SPIR-V
linker.
Fix variable naming in the linker
Some of the variables were using mixed case, which did not follow the
Google C++ Style guidelines.
Linker: Use EXPECT_EQ when possible and update some test
* Replace occurrences of ASSERT_EQ by EXPECT_EQ when possible;
* Reformulated some of the error messages;
* Added the symbol name in the error message when there is a type or
decoration mismatch between the imported and exported declarations.
Opt: List all duplicates removed by RemoveDuplicatePass in the header
Opt: Make the const version of GetLabelInst() return a pointer
For consistency with the non-const version, as well as other similar
functions.
Opt: Rename function_end to EndInst()
As pointed out by dneto0 the previous name was quite confusing and could
be mistaken with a function returning an end iterator.
Also change the return type of the const version to a pointer rather
than a reference, for consistency.
Opt: Add performance comment to RemoveDuplicateTypes and decorations
This comment was requested during the review of
https://github.com/KhronosGroup/SPIRV-Tools/pull/693.
Opt: Add comments and fix variable naming in RemoveDuplicatePass
* Add missing comments to private functions;
* Rename variables that were using mixed case;
* Add TODO for moving AreTypesEqual out.
Linker: Remove commented out code and add TODOs
Linker: Merged together strings that were split too much
Implement a C++ RAII wrapper around spv_context
In value numbering, we treat loads and stores of images, i.e. OpImageLoad,
as memory operations where we are interested in the "base address" of
the instruction. In those cases, the base is an image instruction.
The problem is that `Instruction::GetBaseAddress()` does not account for
the image instructions, so the assert at the end to make sure it found
a valid base address for its addressing mode fails.
The solution is to look at the load/store instruction to determine how
the assertion should be done.
Fixes #1160.
This fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/1159. I
had missed a nuance in the original algorithm. When simulating Phi
instructions, the SSA edges out of a Phi instruction should never be
added to the list of edges to simulate.
Phi instructions can be in SSA def-use cycles with other Phi
instructions. This was causing the propagator to fall into an infinite
loop when the same def-use edge kept being added to the queue.
The original algorithm in the paper specifically separates the visit of
a Phi instruction vs the visit of a regular instruction. This fix makes
the implementation match the original algorithm.
1. Added OpCompositeExtract/Insert out of bounds checks where possible
(everything except RuntimeArray)
2. Moved validation of OpCompositeExtract/Insert from validate_id.cpp to
validate_composites.cpp.
In CCP we should not need to insert Phi nodes because CCP never looks at
loads/stores. This required adjusting two tests that relied on Phi
instructions being inserted. I changed the tests to have the Phi
instructions pre-inserted.
I also added a new test to make sure that CCP does not try to look
through stores and loads.
Finally, given that CCP does not handle loads/stores, it's better to run
mem2reg before it. I've changed the -O/-Os schedules to run local
multi-store elimination before CCP.
Although this is just an efficiency fix for CCP, it is
also working around a bug in Phi insertion. When Phi instructions are
inserted, they are never associated a basic block. This causes a
segfault when the propagator tries to lookup CFG edges when analyzing
Phi instructions.
Add grammar file for DebugInfo extended instruction set
- Each new operand enum kind in extinst.debuginfo.grammar.json maps
to a new value in spv_operand_type_t.
- Add new concrete enum operand types for DebugInfo
Generate a C header for the DebugInfo extended instruction set
Add table lookup of DebugInfo extended instructions
Handle the debug info operand types in binary parser,
disassembler, and assembler.
Add DebugInfo round trip tests for assembler, disassembler
Android.mk: Support DebugInfo extended instruction set
The extinst.debuginfo.grammar.json file is currently part of
SPIRV-Tools source.
It contributes operand type enums, so it has to be processed
along with the core grammar files.
We also generate a C header DebugInfo.h.
Add necessary grammar file processing to Android.mk.
This implements the conditional constant propagation pass proposed in
Constant propagation with conditional branches,
Wegman and Zadeck, ACM TOPLAS 13(2):181-210.
The main logic resides in CCPPass::VisitInstruction. Instructions that
may produce a constant value are evaluated with the constant folder. If
they produce a new constant, the instruction is considered interesting.
Otherwise, it's considered varying (for unfoldable instructions) or
just not interesting (when not enough operands have a constant value).
The other main piece of logic is in CCPPass::VisitBranch. This
evaluates the selector of the branch. When it's found to be a known
value, it computes the destination basic block and sets it. This tells
the propagator which branches to follow.
The patch required extensions to the constant manager as well. Instead
of hashing the Constant pointers, this patch changes the constant pool
to hash the contents of the Constant. This allows the lookups to be
done using the actual values of the Constant, preventing duplicate
definitions.
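The content-based hashing can be sketched like this, with a made-up Constant layout standing in for the real ConstantManager's classes:
```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Two constants with the same type and words compare equal, so a lookup by
// value finds the existing entry instead of creating a duplicate definition.
struct Constant {
  uint32_t type_id;
  std::vector<uint32_t> words;

  bool operator==(const Constant& other) const {
    return type_id == other.type_id && words == other.words;
  }
};

struct ConstantHash {
  std::size_t operator()(const Constant& c) const {
    std::size_t h = c.type_id;
    for (uint32_t w : c.words) h = h * 31 + w;  // simple hash combine
    return h;
  }
};

using ConstantPool = std::unordered_set<Constant, ConstantHash>;
```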
In order to keep track of all of the implicit capabilities as well as
the explicit ones, we will add them all to the feature manager. That is
the object that needs to be queried when checking if a capability is
enabled.
The name of the "HasCapability" function in the module was changed to
make it more obvious that it does not check for implied capabilities.
Keep an spv_context and AssemblyGrammar in IRContext
* changed the way duplicate types are removed to stop copying
instructions
* Reworked RemoveDuplicatesPass::AreTypesSame to use type manager and
type equality
* Reworked TypeManager memory management to store a pool of unique
pointers of types
* removed unique pointers from id map
* fixed instances where free'd memory could be accessed
A few optimizations are updated to handle code that is supposed to be
using the logical addressing mode, but still has variables that contain
pointers as long as the pointer are to opaque objects. This is called
"relaxed logical addressing".
|Instruction::GetBaseAddress| will check that pointers that are used meet
the relaxed logical addressing rules. Optimizations that now handle
relaxed logical addressing instead of logical addressing are:
- aggressive dead-code elimination
- local access chain convert
- local store elimination passes.
When a private variable is used in a single function, it can be
converted to a function scope variable in that function. This adds a
pass that does that. The pass can be enabled using the option
`--private-to-local`.
This transformation allows other transformations to act on these
variables.
Also moved `FindPointerToType` from the inline class to the type manager.
- Test validation success for OpEmitVertex OpEndPrimitive
- Test missing capabilities for primitive instructions
- Primitive instructions require Geometry execution model
@ehsannas had filed an issue against SPIR-V spec, concerning
Image Operands section (3.14):
Sample
A following operand is the sample number of the sample to use. Only
valid with OpImageFetch, OpImageRead, and OpImageWrite.
Relaxing the check to allow OpImageSparseRead and
OpImageSparseFetch to fix failing tests.
types. This allows the lookup of type declaration ids from arbitrarily
constructed types. Users should be cautious when dealing with non-unique
types (structs and potentially pointers) to get the exact id if
necessary.
* Changed the spec composite constant folder to handle ambiguous composites
* Added functionality to create necessary instructions for a type
* Added ability to remove ids from the type manager
This fixes issue #1075
- Mark continue when conditional branch with merge block.
Only mark if merge block is not continue block.
- Handle conditional branch break with preceding merge
Inlining is not setting the parent (function) for each basic block.
This can cause problems for later optimizations. The solution is to set
the parent for each new block just before it is linked into the
function.
include: Add target environment enums for OpenCL 1.2 and 2.0
Validator: Validate OpenCL capabilities
Update validate capabilities to handle embedded profiles
Add test for OpenCL capabilities validation
Update messages to mention the OpenCL profile used
Re-format val_capability_test.cpp
Adds a scalar replacement pass. The pass considers all function scope
variables of composite type. If there are accesses to individual
elements (and it is legal) the pass replaces the variable with a
variable for each composite element and updates all the uses.
Added the pass to -O
Added NumUses and NumUsers to DefUseManager
Added some helper methods for the inst to block mapping in context
Added some helper methods for specific constant types
No longer generate duplicate pointer types.
* Now searches for an existing pointer of the appropriate type instead
of failing validation
* Fixed spec constant extracts
* Addressed changes for review
* Changed RunSinglePassAndMatch to be able to run validation
* current users do not enable it
Added handling of acceptable decorations.
* Decorations are also transferred where appropriate
Refactored extension checking into FeatureManager
* Context now owns a feature manager
* consciously NOT an analysis
* added some tests
* fixed some minor issues related to decorates
* added some decorate related tests for scalar replacement
Adds a pass that looks for redundant instructions in a function, and
removes them. The algorithm is a hash table based value numbering
algorithm that traverses the dominator tree.
This pass removes completely redundant instructions, not partially
redundant ones.
Currently when inlining a call, the name and decorations for the result of the
call are not deleted. This should be changed. Added a test for this as well.
This fixes issue #622.
Support for dominator and post dominator analysis on ir::Functions. This patch contains a DominatorTree class for building the tree and DominatorAnalysis and DominatorAnalysisPass classes for interfacing and caching the built trees.
The current method of removing an instruction is to call ToNop. The
problem with this is that it leaves around an instruction that later
passes will look at. We should just delete the instruction.
In MemPass there is a utility routine called DCEInst. It can delete
essentially any instruction, which can invalidate pointers now that they
are actually deleted. The interface was changed to add a call back that
can be used to update any local data structures that contain
ir::Instruction*.
Computing the value numbers on demand, as we do now, can lead to
different results depending on the order in which the users asks for
the value numbers. To make things more stable, we compute them ahead
of time.
This needs custom code since the rules from the extension
are not encoded in the grammar.
Changes are:
- The new group instructions don't require Group capability
when the extension is declared.
- The Reduce, InclusiveScan, ExclusiveScan normally require the Kernel
capability, but don't when the extension is declared.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/991
This class implements a generic value propagation algorithm based on the
conditional constant propagation algorithm proposed in
Constant propagation with conditional branches,
Wegman and Zadeck, ACM TOPLAS 13(2):181-210.
The implementation is based on
A Propagation Engine for GCC
Diego Novillo, GCC Summit 2005
http://ols.fedoraproject.org/GCC/Reprints-2005/novillo-Reprint.pdf
The purpose of this implementation is to act as a common framework for any
transformation that needs to propagate values from statements producing new
values to statements using those values.
Re-formatted the source tree with the command:
$ /usr/bin/clang-format -style=file -i \
$(find include source tools test utils -name '*.cpp' -or -name '*.h')
This required a fix to source/val/decoration.h. It was not including
spirv.h, which broke builds when the #include headers were re-ordered by
clang-format.
Removed the check that result type of OpImageRead should be a vector4.
Will reenable/adapt once the spec is clarified on what the right
dimension should be.
Replaced representation of uses
* Changed uses from unordered_map<uint32_t, UseList> to
set<pair<Instruction*, Instruction*>>
* Replaced GetUses with ForEachUser and ForEachUse functions
* updated passes to use new functions
* partially updated tests
* lots of cleanup still todo
Adding a unique id to Instruction generated by IRContext
Each instruction is given a unique id that can be used for ordering
purposes. The ids are generated via the IRContext.
Major changes:
* Instructions now contain a uint32_t for unique id and a cached context
pointer
* Most constructors have been modified to take a context as input
* unfortunately I cannot remove the default and copy constructors, but
developers should avoid these
* Added accessors to parents of basic block and function
* Removed the copy constructors for BasicBlock and Function and replaced
them with Clone functions
* Reworked BuildModule to return an IRContext owning the built module
* Since all instructions require a context, the context now becomes the
basic unit for IR
* Added a constructor to context to create an owned module internally
* Replaced uses of Instruction's copy constructor with Clone wherever I
found them
* Reworked the linker functionality to perform clones into a different
context instead of moves
* Updated many tests to be consistent with the above changes
* Still need to add new tests to cover added functionality
* Added comparison operators to Instruction
Adding tests for Instruction, IRContext and IR loading
Fixed some header comments for BuildModule
Fixes to get tests passing again
* Reordered two linker steps to avoid use/def problems
* Fixed def/use manager uses in merge return pass
* Added early return for GetAnnotations
* Changed uses of Instruction::ToNop in passes to IRContext::KillInst
Simplifying the uses for some contexts in passes
Creates a pass that removes redundant instructions within the same basic
block. This will be implemented using a hash based value numbering
algorithm.
Added a number of functions that check for the Vulkan descriptor types.
These are used to determine whether variables are read-only or not.
Implemented a function to check if loads and variables are read-only.
Implemented kernel specific and shader specific versions.
A big change is that the Combinator analysis in ADCE is factored out
into the IRContext as an analysis. This was done because it is being
reused in the value number table.
Add new "short descriptor" algorithm to MARK-V codec.
Add three shader compression models:
lite - fast, poor compression
mid - balanced
max - best compression
Each instruction is given a unique id that can be used for ordering
purposes. The ids are generated via the IRContext.
Major changes:
* Instructions now contain a uint32_t for unique id and a cached context
pointer
* Most constructors have been modified to take a context as input
* unfortunately I cannot remove the default and copy constructors, but
developers should avoid these
* Added accessors to parents of basic block and function
* Removed the copy constructors for BasicBlock and Function and replaced
them with Clone functions
* Reworked BuildModule to return an IRContext owning the built module
* Since all instructions require a context, the context now becomes the
basic unit for IR
* Added a constructor to context to create an owned module internally
* Replaced uses of Instruction's copy constructor with Clone wherever I
found them
* Reworked the linker functionality to perform clones into a different
context instead of moves
* Updated many tests to be consistent with the above changes
* Still need to add new tests to cover added functionality
* Added comparison operators to Instruction
* Added an internal option to LinkerOptions to verify merged ids are
unique
* Added a test for the linker to verify merged ids are unique
* Updated MergeReturnPass to supply a context
* Updated DecorationManager to supply a context for cloned decorations
* Reworked several portions of the def use tests in anticipation of next
set of changes
If SPIRV-Tools is used as an external project with googletest kept in
the same directory as it, gmock-matchers.h will not be present in
external/. This will result in a compilation error.
Use gmock.h instead.
To make the decoration manager available everywhere, and to reduce the
number of times it needs to be built, I added one to the IRContext.
At the same time, I moved code that modifies decoration instructions into
the IRContext from mempass and the decoration manager. This will make
it easier to keep everything up to date.
This should take care of issue #928.
Works with current DefUseManager infrastructure.
Added merge return to the standard opts.
Added validation to passes.
Disabled pass for shader capability.
This analysis builds a map from instructions to the basic block that
contains them. It is accessed via get_instr_block(). Once built, it is kept
up-to-date by the IRContext, as long as instructions are removed via
KillInst.
I have not yet marked passes that preserve this analysis. I will do it
in a separate change.
Other changes:
- Add documentation that analysis enum values must be powers of 2 (see the
sketch after this list).
- Force a re-build of the def-use manager in tests.
- Fix AllPreserveFirstOnlyAfterPassWithChange to use the
DummyPassPreservesFirst pass.
- Fix sentinel value for IRContext::Analysis enum.
- Fix logic for checking if the instr<->block mapping is valid in KillInst.
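A loose sketch of why analysis enum values must be powers of 2 (names are
illustrative, not the real IRContext interface): each analysis owns one bit,
and the set of currently valid analyses is a single bitmask that can be
tested and cleared.

#include <cstdint>

// Each analysis gets its own bit, so a set of valid analyses is one integer.
enum Analysis : uint32_t {
  kAnalysisNone         = 0,
  kAnalysisDefUse       = 1 << 0,
  kAnalysisInstrToBlock = 1 << 1,
  kAnalysisEnd          = 1 << 2,  // sentinel: one past the last analysis bit
};

class Ctx {  // hypothetical context
 public:
  bool AreAnalysesValid(uint32_t set) const { return (valid_ & set) == set; }
  void InvalidateAnalyses(uint32_t set) { valid_ &= ~set; }
  void BuildInstrToBlockMapping() { /* ... */ valid_ |= kAnalysisInstrToBlock; }
 private:
  uint32_t valid_ = kAnalysisNone;
};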
Fixes issue #728. Currently the inliner does not generate decorations for
inlined code that corresponds to decorated function code. Examples of
relevant decorations: RelaxedPrecision, NoContraction.
The solution is to replicate the decorations during inlining.
Add Effcee as an optional dependency for use in tests. In future it will
be a required dependency.
Effcee is a stateful pattern matcher that has much of the functionality
of LLVM's FileCheck, except in library form. Effcee makes it much easier
to write tests for optimization passes.
Demonstrate its use in a test for the strength-reduction pass.
Update README.md with example commands of how to get sources.
Update Appveyor and Travis-CI build rules.
Also: Include test libraries if not SPIRV_SKIP_TESTS
- SPIRV_SKIP_TESTS is implied by SPIRV_SKIP_EXECUTABLES
This change moves the instances of the def-use manager to the
IRContext. This allows it to persist across optimizations, so it does
not have to be rebuilt multiple times.
Added test to ensure that the IRContext is validating and invalidating
the analyses correctly.
This is the first part of adding the IRContext. This class is meant to
hold the extra data that is built on top of the module that it
owns.
The first part will simply create the IRContext class and get it passed
to the passes in place of the module. For now it does not have any
functionality of its own, but it acts more as a wrapper for the module.
The functions that I added to the IRContext are those that either
traverse the headers or add to them. I did this because we may decide
to have other ways of dealing with these sections (for example adding a
type pool, or use the decoration manager).
I also added the functions that add to the header because the IRContext
needs to know when an instruction is added in order to update other data
structures appropriately.
Note that there is still lots of work that needs to be done. There are
still many places that change the module, and do not inform the context.
That will be the next step.
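In outline, and with simplified placeholder types rather than the real
interfaces, the wrapper looks something like the sketch below: the context
owns the module and funnels header additions through itself so that data
built on top of the module can be kept in sync.

#include <memory>
#include <utility>
#include <vector>

struct Instruction {};

struct Module {
  std::vector<std::unique_ptr<Instruction>> capabilities;
  // ... types, constants, functions, and the rest of the header sections ...
};

class IRContext {
 public:
  explicit IRContext(std::unique_ptr<Module> m) : module_(std::move(m)) {}
  Module* module() { return module_.get(); }

  // Additions to header sections go through the context so it can keep any
  // data built on top of the module (e.g. a future type pool) up to date.
  void AddCapability(std::unique_ptr<Instruction> inst) {
    module_->capabilities.push_back(std::move(inst));
    OnHeaderChanged();
  }

 private:
  void OnHeaderChanged() { /* update or invalidate dependent structures */ }
  std::unique_ptr<Module> module_;
};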
Mark structured conditional branches live only if one or more instructions
in their associated construct are marked live. After closure, replace dead
structured conditional branches with a branch to their merge block and remove
dead blocks.
ADCE: Dead If Elim: Remove duplicate StructuredOrder code
Also generalize ComputeStructuredOrder so that the caller can specify the
root block for the order. Phi insertion uses pseudo_entry_block and adce and
dead branch elim use the first block of the function.
ADCE: Dead If Elim: Pull redundant code out of InsertPhiInstructions
ADCE: Dead If Elim: Encapsulate CFG Cleanup Initialization
ADCE: Dead If Elim: Remove redundant code from ADCE initialization
ADCE: Dead If: Use CFGCleanup to eliminate newly dead blocks
Moved bulk of CFG Cleanup code into MemPass.
There are a number of users of spirv-opt that are hitting errors
because of stores with different types. In general, this is wrong, but,
in these cases, the types are exactly the same except for decorations.
The option is "--relax-store-struct", and it can be used with the
validator or the optimizer.
We assume that if layout information is missing, it is consistent. For
example, if one struct has an offset for one of its members and the other
one does not, we still consider them layout compatible.
The problem arises if both structs have an offset decoration for
corresponding members and the offsets are different.
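As a small illustration of the layout rule described above (OffsetsCompatible
is a made-up helper, not part of the tool): members are treated as compatible
unless both carry an Offset decoration and the offsets differ.

#include <cstdint>
#include <optional>

// Two corresponding struct members are layout-compatible when at most one of
// them carries an Offset decoration, or both carry the same offset.
bool OffsetsCompatible(std::optional<uint32_t> a, std::optional<uint32_t> b) {
  if (!a || !b) return true;  // missing layout info: assume consistent
  return *a == *b;
}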
This change replaces a number of the
std::vector<std::unique_ptr<Instruction>> members of the module with
InstructionList. This is for consistency and to make it easier to
delete instructions that are no longer needed.
Function static non-POD data causes problems with DLL lifetime.
This pull request turns all static info tables into strict POD
tables. Specifically, the capabilities/extensions field of
opcode/operand/extended-instruction table are turned into two
fields, one for the count and the other a pointer to an array of
capabilities/extensions. CapabilitySet/EnumSet are not used in
the static table anymore, but they are still used for checking
inclusion by constructing them on the fly, which should be cheap in
the majority of cases.
Also moves all these tables into the global namespace to avoid
C++11 function static thread-safe initialization overhead.
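The shape of the change, sketched with made-up table contents: a table entry
holds only a count and a raw pointer (both POD, so there is no dynamic
initialization or destruction to interact badly with DLL lifetime), and a
set-like inclusion check is done on the fly over the small array.

#include <algorithm>
#include <cstdint>

enum class Capability : uint32_t { Matrix, Shader, Kernel };

// Plain-old-data table entry: safe as a global or function-static in a DLL.
struct OpcodeEntry {
  const char* name;
  uint32_t num_capabilities;
  const Capability* capabilities;  // points into another static array
};

static const Capability kTypeMatrixCaps[] = {Capability::Matrix};
static const OpcodeEntry kTypeMatrixEntry = {"OpTypeMatrix", 1,
                                             kTypeMatrixCaps};

// Inclusion check; the real code builds a CapabilitySet on the fly, which is
// cheap for arrays this small.
bool EntryHasCapability(const OpcodeEntry& e, Capability c) {
  const Capability* end = e.capabilities + e.num_capabilities;
  return std::find(e.capabilities, end, c) != end;
}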
Markv codec now receives two optional callbacks:
LogConsumer for internal codec logging
DebugConsumer for testing if encoding->decoding produces the original
results.
There does not seem to be any pass that removes global variables. I
think we could use one. This pass will look specifically for global
variables that are not referenced and are not exported. Any decoration
associated with the variable will also be removed. However, this could
cause types or constants to become unreferenced. They will not be
removed. Another pass will have to be called to remove those.
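A bare-bones sketch of the idea with a placeholder Var type (not the real
pass): a module-scope variable goes away only when it has no uses and is not
exported; removing its decorations and any newly unreferenced types or
constants is left to other code, as described above.

#include <algorithm>
#include <memory>
#include <vector>

struct Var {
  bool has_uses;     // any reference from the rest of the module
  bool is_exported;  // e.g. carries an export linkage decoration
};

// Remove module-scope variables that are neither referenced nor exported.
void EliminateDeadGlobals(std::vector<std::unique_ptr<Var>>* globals) {
  globals->erase(
      std::remove_if(globals->begin(), globals->end(),
                     [](const std::unique_ptr<Var>& v) {
                       return !v->has_uses && !v->is_exported;
                     }),
      globals->end());
}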
The pass checks the correctness of operands of instructions in the opcode
range OpConvertFToU - OpBitcast.
Disabled invalid tests
Disabled UConvert validation until Vulkan CTS can catch up.
Add validate_conversion to Android.mk
Also remove duplicate entry in CMakeLists.txt.
This is the first step in replacing the std::vector of Instruction
pointers with an intrusive linked list.
To this end, we created the InstructionList class. It inherits from
the IntrusiveList class, but adds the extra concept of ownership: an
InstructionList owns the instructions that are in it. This is to be
consistent with the current ownership rules, where the vector owns the
instructions that are in it.
The other larger change is that the inst_ member of the BasicBlock class
was changed to use the InstructionList class.
Added tests for the InsertBefore functions, and for making sure that the
InstructionList destructor deletes the elements that it contains.
I've also added extra comments to explain ownership a little better.
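The ownership rule in miniature, with a toy Node type instead of Instruction
(the real class inherits from IntrusiveList, which this sketch flattens
away): the links live inside the element, and the owning list deletes
whatever is still linked into it when it is destroyed.

// Minimal intrusive node: the links live inside the element itself.
struct Node {
  Node* prev = nullptr;
  Node* next = nullptr;
  int payload = 0;
};

// An owning intrusive list: unlike a plain intrusive list, it deletes the
// nodes that are still linked into it when it goes away.
class OwningList {
 public:
  ~OwningList() {
    for (Node* n = head_; n != nullptr;) {
      Node* next = n->next;
      delete n;
      n = next;
    }
  }
  void push_back(Node* n) {  // takes ownership of |n|
    n->prev = tail_;
    n->next = nullptr;
    if (tail_) tail_->next = n; else head_ = n;
    tail_ = n;
  }
 private:
  Node* head_ = nullptr;
  Node* tail_ = nullptr;
};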
- Adds a new pass CFGCleanupPass. This serves as an umbrella pass to
remove unnecessary cruft from a CFG.
- Currently, the only cleanup operation done is the removal of
unreachable basic blocks.
- Adds unit tests.
- Adds a flag to spirv-opt to execute the pass (--cfg-cleanup).
- switched from C to C++
- moved MARK-V model creation from backend to frontend
- The same MARK-V model object can be used to encode/decode multiple
files
- Added MARK-V model factory (currently only one option)
- Added --validate option to spirv-markv (run validation while
encoding/decoding)
This commit is the initial implementation of the intrusive linked list
class. It includes the implementation in the header files, and unit
test.
The iterators are circular: incrementing end() gives begin() and
decrementing begin() gives end(). Also made it valid to
decrement end().
Explicitly defines move constructor and move assignment
- Visual Studio 2013 does not implicitly generate the move constructor or
move assignments. So they need to be explicit, otherwise it will try to
use the copy constructor, which we explicitly deleted.
- Can't use "= default" either.
Seems like VS2013 does not support explicitly using the default move
constructors and move assignments, so I wrote them out.
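What writing them out amounts to, reduced to a placeholder Widget class: the
copy operations stay deleted, and the move operations are spelled out member
by member instead of being defaulted.

#include <utility>
#include <vector>

class Widget {
 public:
  Widget() = default;
  Widget(const Widget&) = delete;             // copying is not allowed
  Widget& operator=(const Widget&) = delete;

  // VS2013 neither generates these implicitly nor accepts "= default",
  // so they are written out explicitly.
  Widget(Widget&& other) : data_(std::move(other.data_)) {}
  Widget& operator=(Widget&& other) {
    data_ = std::move(other.data_);
    return *this;
  }

 private:
  std::vector<int> data_;
};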
Expands dead branch elimination to eliminate dead switch cases. It also
changes dbe to eliminate orphaned merge blocks and recursively eliminate
any blocks thereby orphaned.
Add extra iterators for ir::Module's sections
Add extra getters to ir::Function
Add a const version of BasicBlock::GetLabelInst()
Use the max of all inputs' version as version
Split debug into debug1 and debug2
- Debug1 instructions have to be placed before debug2 instructions.
Error out if different addressing or memory models are found
Exit early if no binaries were given
Error out if entry points are redeclared
Implement copy ctors for Function and BasicBlock
- Visual Studio ends up generating copy constructors that call deleted
functions while compiling the linker code, while GCC and clang do not.
So explicitly write those functions to avoid Visual Studio messing up.
Move removing duplicate capabilities to its own pass
Add functions running on all IDs present in an instruction
Remove duplicate SpvOpExtInstImport
Give default options value for link functions
Remove linkage capability if not making a library
Check types before allowing to link
Detect if two types/variables/functions have different decorations
Remove decorations of imported variables/functions and their types
Add a DecorationManager
Add a method for removing all decorations of id
Add methods for removing operands from instructions
Error out if one of the modules has a non-zero schema
Update README.md to talk about the linker
Do not freak out if an imported built-in variable has no export
Creates a pass called eliminate dead functions that looks for functions
that could never be called, and deletes them from the module.
To support this change a new function was added to the Pass class to
traverse the call trees from different starting points.
Includes a test to ensure that annotations are removed when deleting a
dead function. They were not, so fixed that up as well.
Did some cleanup of the assembly for the tests in pass_test.cpp, trying
to make them smaller and easier to read.
Create a new optimization pass, strength reduction, which will replace
integer multiplication by a constant power of 2 with an equivalent bit
shift. More changes could be added later.
- Does not duplicate constants
- Adds vector |Concat| utility function to a common test header.
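The core check and rewrite, as a standalone sketch with made-up helper names:
a multiplication by a constant is only replaced when the constant is a
non-zero power of two, and the shift amount is its log2.

#include <cstdint>

// True if |c| has exactly one bit set, i.e. is a power of two.
bool IsPowerOfTwo(uint32_t c) { return c != 0 && (c & (c - 1)) == 0; }

// Number of trailing zero bits, i.e. log2 of a power of two.
uint32_t Log2(uint32_t c) {
  uint32_t shift = 0;
  while ((c >>= 1) != 0) ++shift;
  return shift;
}

// x * c  ==>  x << log2(c) when c is a power of two; otherwise unchanged.
uint32_t MultiplyOrShift(uint32_t x, uint32_t c) {
  return IsPowerOfTwo(c) ? (x << Log2(c)) : x * c;
}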
Includes:
- Multi-sequence move-to-front
- Coding by id descriptor
- Statistical coding of non-id words
- Joint coding of opcode and num_operands
Removed `explicit` from the Huffman codec constructor
- The standard use case for it is to be constructed from an initializer list.
Using serialization for Huffman codecs
This adapts the fix for the single-block loop. Split the loop like
before. But when we move the OpLoopMerge back to the loop header,
redirect the continue target only when the original loop was a single
block loop.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/800
Use the list from the SPIR-V registry page. Also, capture it as
a string so it's much easier to update via copy-paste.
The validator will accept modules that declare these known
extensions. However, we might not know about new tokens or
instructions declared in them. For that we need grammar updates
applied to SPIRV-Headers.
If the caller block is a single-block loop and inlining will
replace the caller block by several blocks, then:
- The original OpLoopMerge instruction will end up in the *last*
such block. That's the wrong place to put it.
- Move it back to the end of the first block.
- Update its Continue Target ID to point to the last block
We also have to take care of cases where the inlined code
begins with a structured header block. In this case
we need to ensure the restored OpLoopMerge does not appear
in the same block as the merge instruction from the callee's
first block.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/787
- DeadBranchElim: Make sure to mark orphan'd merge blocks and continue
targets as live.
- Add test with loop in dead branch
- Add test that orphan'd merge block is handled.
Fixes https://github.com/KhronosGroup/SPIRV-Tools/issues/776
Bit stream writer was manifesting incorrect behaviour when the following
two conditions were met:
- writer was on 64-bit word boundary
- WriteBits was invoked with num_bits=0 (can happen when a Huffman codec has only one
value)
The bug was causing very rare sporadic corruption which was detected by
tests after a random experimental change in MARK-V model.
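An illustration of the failure mode in a simplified writer (not the real bit
stream code, and the guard shown is just one way to avoid the problem): when
the write position sits exactly on a 64-bit word boundary, a zero-length
write has to be a no-op, otherwise the buffer grows by a spurious empty word.

#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified 64-bit-word bit writer; illustrative only.
class BitWriter {
 public:
  void WriteBits(uint64_t bits, size_t num_bits) {
    // Guard: without this, a zero-length write landing exactly on a word
    // boundary would append an empty word below and corrupt the stream.
    if (num_bits == 0) return;
    bits &= (num_bits == 64) ? ~uint64_t(0) : ((uint64_t(1) << num_bits) - 1);
    const size_t word = bit_pos_ / 64;
    const size_t offset = bit_pos_ % 64;
    if (word >= buffer_.size()) buffer_.push_back(0);
    buffer_[word] |= bits << offset;
    if (offset + num_bits > 64) buffer_.push_back(bits >> (64 - offset));
    bit_pos_ += num_bits;
  }

 private:
  std::vector<uint64_t> buffer_;
  size_t bit_pos_ = 0;
};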
Only inline calls to functions with opaque params or return
TODO: Handle parameter type or return type where the opaque
type is buried within an array.
Includes code to deal correctly with OpFunctionParameter. This
is needed by opaque propagation which may not exhaustively inline
entry point functions.
Adds ProcessEntryPointCallTree: a method to do work on the
functions in the entry point call trees in a deterministic order.
Refactored the Huffman codec implementation and added the ability to
serialize to a C++-like text format. This reduces the time complexity
of loading hard-coded codecs.
Id descriptors are computed as a recursive hash of all instructions used
to define an id. Descriptors are invariant to actual id values, and
similar code in different files produces the same descriptors.
Multiple ids can have the same descriptor. For example
%1 = OpConstant %u32 1
%2 = OpConstant %u32 1
would produce two ids with the same descriptor. But
%3 = OpConstant %s32 1
%4 = OpConstant %u32 2
would have descriptors different from %1 and %2.
Descriptors will be used as handles of move-to-front sequences in SPIR-V
compression.
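An approximate sketch of the scheme with a toy instruction type and a made-up
hash combiner (not the codec's actual hashing): an id's descriptor hashes the
defining instruction's opcode, its literal words, and the descriptors of the
ids it uses, so it never depends on the numeric id values themselves.

#include <cstdint>
#include <unordered_map>
#include <vector>

struct ToyInst {
  uint32_t opcode;
  std::vector<uint32_t> literal_words;  // non-id operands
  std::vector<uint32_t> used_ids;       // ids of the operands defining values
};

class DescriptorTable {
 public:
  // Descriptor of |id|, computed from what defines it, never from its value.
  // (Cycles, e.g. through OpPhi, would need extra handling.)
  uint32_t GetDescriptor(uint32_t id,
                         const std::unordered_map<uint32_t, ToyInst>& defs) {
    auto it = cache_.find(id);
    if (it != cache_.end()) return it->second;
    const ToyInst& inst = defs.at(id);
    uint32_t h = Combine(0, inst.opcode);
    for (uint32_t w : inst.literal_words) h = Combine(h, w);
    for (uint32_t used : inst.used_ids)   // recurse into operand descriptors
      h = Combine(h, GetDescriptor(used, defs));
    cache_[id] = h;
    return h;
  }

 private:
  static uint32_t Combine(uint32_t seed, uint32_t v) {
    return seed ^ (v + 0x9e3779b9u + (seed << 6) + (seed >> 2));
  }
  std::unordered_map<uint32_t, uint32_t> cache_;
};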
ADCE will now generate correct code in the presence of function calls.
This is needed for opaque type optimization needed by glslang. Currently
all function calls are marked as live. TODO: mark calls live only if they
write a non-local.
- UniformElim: Only process reachable blocks
- UniformElim: Don't reuse loads of samplers and images across blocks.
Added a second phase which only reuses loads within a block for samplers
and images.
- UniformElim: Upgrade CopyObject skipping in GetPtr
- UniformElim: Add extensions whitelist
Currently disallowing SPV_KHR_variable_pointers because it doesn't
handle extended pointer forms.
- UniformElim: Do not process shaders with GroupDecorate
- UniformElim: Bail on shaders with non-32-bit ints.
- UniformElim: Document support for only single index and add TODO.
Add MultiMoveToFront class which supports multiple move-to-front
sequences and allows promoting a value in all sequences at once.
Added caching for last accessed sequence handle and last accessed value
in each sequence.
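A reduced sketch with illustrative names (the caching of the last accessed
handle and value mentioned above is omitted): each handle owns its own
move-to-front sequence, and a value can additionally be promoted across every
sequence in one call.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>

// One move-to-front sequence.
class MoveToFront {
 public:
  // Returns the 1-based rank of |value| and moves it to the front; returns 0
  // and inserts it at the front if it was not seen before.
  size_t RankAndPromote(uint32_t value) {
    size_t rank = 1;
    for (auto it = seq_.begin(); it != seq_.end(); ++it, ++rank) {
      if (*it == value) {
        seq_.erase(it);
        seq_.push_front(value);
        return rank;
      }
    }
    seq_.push_front(value);
    return 0;
  }
  // Moves |value| to the front only if it is already in the sequence.
  void PromoteIfPresent(uint32_t value) {
    auto it = std::find(seq_.begin(), seq_.end(), value);
    if (it != seq_.end()) {
      seq_.erase(it);
      seq_.push_front(value);
    }
  }
 private:
  std::list<uint32_t> seq_;
};

// Multiple sequences keyed by handle, with a promote-everywhere operation.
class MultiMoveToFront {
 public:
  size_t RankAndPromote(uint64_t handle, uint32_t value) {
    return sequences_[handle].RankAndPromote(value);
  }
  void PromoteInAll(uint32_t value) {
    for (auto& kv : sequences_) kv.second.PromoteIfPresent(value);
  }
 private:
  std::unordered_map<uint64_t, MoveToFront> sequences_;
};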
Currently only SPV_KHR_variable_pointers is disallowed in passes which
do pointer analysis. Positive and negative tests of the general extensions
mechanism were added to aggressive_dce but cover all passes.
Create aggressive dead code elimination pass
This pass eliminates unused code from functions. In addition,
it detects and eliminates code which may have spurious uses but which does
not contribute to the output of the function. The most common cause of
such code sequences is summations in loops whose results are no longer used
due to dead code elimination. This optimization has additional compile
time cost over standard dead code elimination.
This pass only processes entry point functions. It also only processes
shaders with logical addressing. It currently will not process functions
with function calls. It currently only supports the GLSL.std.450 extended
instruction set. It currently does not support any extensions.
This pass will be made more effective by first running passes that remove
dead control flow and inlines function calls.
This pass can be especially useful after running Local Access Chain
Conversion, which tends to cause cycles of dead code to be left after
Store/Load elimination passes are completed. These cycles cannot be
eliminated with standard dead code elimination.
Additionally: This transform uses a whitelist of instructions that it
knows do not have side effects (a.k.a. combinators). It assumes all other
instructions may have side effects: it will not remove them, and it keeps
the instructions referenced by their id operands live.
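In outline, and with placeholder types rather than the pass's actual classes,
the analysis is a mark-and-sweep over the def-use graph: instructions that
may have side effects seed a worklist, liveness flows into their id operands,
and anything never marked can be removed.

#include <queue>
#include <unordered_set>
#include <vector>

struct Op {
  bool is_combinator;         // result depends only on operands, no side effects
  std::vector<Op*> operands;  // defining instructions of the id operands
};

// Mark-and-sweep DCE: everything not reachable from a possibly side-effecting
// instruction through operand edges is dead.
std::unordered_set<Op*> ComputeLive(const std::vector<Op*>& all) {
  std::unordered_set<Op*> live;
  std::queue<Op*> worklist;
  for (Op* op : all) {
    if (!op->is_combinator && live.insert(op).second) worklist.push(op);
  }
  while (!worklist.empty()) {
    Op* op = worklist.front();
    worklist.pop();
    for (Op* def : op->operands) {
      if (live.insert(def).second) worklist.push(def);  // operands stay live
    }
  }
  return live;  // instructions not in this set can be removed
}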
An SSA local variable load/store elimination pass.
For every entry point function, eliminate all loads and stores of function
scope variables only referenced with non-access-chain loads and stores.
Eliminate the variables as well.
The presence of access chain references and function calls can inhibit
the above optimization.
Only shader modules with logical addressing are currently processed.
Currently modules with any extensions enabled are not processed. This
is left for future work.
This pass is most effective if preceded by Inlining and
LocalAccessChainConvert. LocalSingleStoreElim and LocalSingleBlockElim
will reduce the work that this pass has to do.