The default target is SPIR-V 1.3.
For example, spirv-as will generate a SPIR-V 1.3 binary by default.
Use the command-line option "--target-env spv1.0" if you want to generate a
SPIR-V 1.0 binary or validate against SPIR-V 1.0 rules.
Examples:
# Generate a SPIR-V 1.0 binary instead of SPIR-V 1.3
spirv-as --target-env spv1.0 a.spvasm -o a.spv
spirv-as --target-env vulkan1.0 a.spvasm -o a.spv
# Validate as SPIR-V 1.0.
spirv-val --target-env spv1.0 a.spv
# Validate as Vulkan 1.0
spirv-val --target-env vulkan1.0 a.spv
When merging types, we do not remove the other information related to the
types. We simply leave it duplicated and hope it is removed later.
This is what happens with decorations: they are removed in the next
phase of remove-duplicates. However, that is not the case for OpNames.
We end up with two different names for the same id, which does not make
sense.
The solution is to remove the names and decorations for the type being
removed instead of rewriting them to refer to the other type.
Note that if the first type does not have a name, the merged type may end
up with no name. That is fine because names should not have any semantic
significance anyway.
This was identified in issue #1372, but this change does not fix that issue.
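A minimal sketch of the intended behavior (the container and function names
here are hypothetical, not the optimizer's actual data structures):

  #include <cstdint>
  #include <map>
  #include <string>

  // Hypothetical model: |names| maps a result id to its OpName string.
  // When |dup_id| is merged into |canonical_id|, the duplicate's name is
  // simply erased rather than rewritten to refer to |canonical_id|, which
  // would otherwise leave two names attached to the same id.
  void DropDuplicateTypeName(std::map<uint32_t, std::string>* names,
                             uint32_t dup_id, uint32_t /*canonical_id*/) {
    names->erase(dup_id);  // the canonical type keeps its own name, if any
  }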
Some tokens only show up in the unified1 grammar, so enum string mappings
have to be generated from that grammar, not from the (deprecated)
include/spirv/1.2 grammar.
Examples: the capabilities FragmentFullyCovered and Float16ImageAMD.
* Also mark function parameters as varying
* Conservatively mark assignment instructions as varying if any input is
varying after attempting to fold
* Added a test to catch this case
Per the Vulkan spec, BuiltIn variables can't have Location or Component
decorations. On some drivers these decorations can cause a crash when
compiling the shader pipeline; for example, NVidia/AMD desktop drivers:
https://github.com/KhronosGroup/glslang/issues/1182.
This change adds validation and tests to catch this.
* getFloatConstantKind() now handles OpConstantNull
* PerformOperation() now handles OpConstantNull for vectors
* Fixed some instances where we would attempt to merge a division by 0
* added tests
The algorithm used in DCEInst to remove dead code is very slow. It is
fine if you only want to remove a small number of instructions, but if
you need to remove a large number of instructions, the algorithm in
ADCE is much faster.
This PR removes the calls to DCEInst in the load-store removal passes
and adds a pass of ADCE afterwards.
I tried a number of different orderings of the optimization passes, and I
believe this is the best I could find.
The results I have on 3 sets of shaders are:
Legalization:
Set 1: 5.39 -> 5.01
Set 2: 13.98 -> 8.38
Set 3: 98.00 -> 96.26
Performance passes:
Set 1: 6.90 -> 5.23
Set 2: 10.11 -> 6.62
Set 3: 253.69 -> 253.74
Size reduction passes:
Set 1: 7.16 -> 7.25
Set 2: 17.17 -> 16.81
Set 3: 112.06 -> 107.71
Note that the third set's compile time is large because of the large
number of basic blocks, not so much because of the number of
instructions. That is why we don't see much gain there.
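For context, here is a rough sketch of why a mark-and-sweep style pass like
ADCE scales better than removing instructions one at a time; this is an
illustration only, not the ADCE implementation:

  #include <cstddef>
  #include <unordered_set>
  #include <vector>

  struct Inst {
    bool has_side_effects;         // e.g. stores, barriers, function calls
    std::vector<size_t> operands;  // indices of the defining instructions
  };

  // One backward marking phase visits each instruction at most once, and a
  // single sweep then deletes everything left unmarked. The cost does not
  // grow with the number of instructions that turn out to be dead, unlike
  // deleting one instruction at a time and re-checking its operands' uses
  // on every removal.
  std::unordered_set<size_t> MarkLive(const std::vector<Inst>& insts) {
    std::unordered_set<size_t> live;
    std::vector<size_t> worklist;
    for (size_t i = 0; i < insts.size(); ++i) {
      if (insts[i].has_side_effects) {
        live.insert(i);
        worklist.push_back(i);
      }
    }
    while (!worklist.empty()) {
      size_t inst = worklist.back();
      worklist.pop_back();
      for (size_t op : insts[inst].operands) {
        if (live.insert(op).second) worklist.push_back(op);  // newly live
      }
    }
    return live;  // anything not in |live| can be removed in one sweep
  }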
Use indirection through latest_version_spirv.h
Also, when generating enum tables, use the unified1 JSON grammar since
it now has FragmentFullyCoveredEXT but the other JSON grammars don't.
They are starting to fall behind.
Adding basis of arithmetic merging
* Refactored constant collection in ConstantManager
* New rules:
* consecutive negates
* negate of arithmetic op with a constant
* consecutive muls
* reciprocal of div
* Removed IRContext::CanFoldFloatingPoint
* replaced by Instruction::IsFloatingPointFoldingAllowed
* Fixed some bad tests
* added some header comments
Added PerformIntegerOperation
* minor fixes to constants and tests
* fixed IntMultiplyBy1 to work with 64-bit ints
* added tests for integer mul merging
Adding test for vector integer multiply merging
Adding support for merging integer add and sub through negate
* Added tests
Adding rules to merge mult with preceding divide
* Has a couple tests, but needs more
* Added more comments
Fixed bug in integer division folding
* Will no longer merge through an integer division if the constant
division would leave a remainder (see the sketch after this list)
* Added a bunch more tests
Adding rules to merge divide and multiply through divide
* Improved comments
* Added tests
Adding rules to handle mul or div of a negation
* Added tests
Changes for review
* Early exit in more functions if no constants are involved
* fixed some comments
* removed unused declaration
* clarified some logic
Adding new rules for add and subtract
* Fold adds of adds, subtracts or negates
* Fold subtracts of adds, subtracts or negates
* Added tests
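A minimal scalar sketch of a few of the rules above, including the
no-remainder guard for integer division (illustration only; the pass performs
the equivalent rewrites on instructions whose other operand is an OpConstant):

  #include <cassert>
  #include <cstdint>

  int main() {
    int32_t x = 3;

    // Consecutive negates: -(-x) == x.
    assert(-(-x) == x);

    // Consecutive multiplies by constants: (x * c1) * c2 == x * (c1 * c2).
    assert((x * 4) * 5 == x * 20);

    // A divide is only merged into a preceding constant multiply when the
    // constant division is exact; otherwise the truncation of the integer
    // division would be lost.
    assert((x * 6) / 3 == x * 2);  // 6 % 3 == 0, safe to merge
    assert((x * 5) / 2 != x * 2);  // 15 / 2 == 7, not 6: must not merge
    return 0;
  }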
This change makes the IR builder use the type manager to generate
OpTypeInts when creating OpConstants. This avoids dangling references
being stored by the created OpConstants.
This pass moves all conditional branches and switches whose conditions are
loop invariant and uniform out of the loop. Before performing the loop
unswitch we check that the loop does not contain any instruction that would
prevent it (barriers, group instructions, etc.).
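A rough before/after illustration of the transformation in C-like code (not
SPIR-V, and not the pass's actual output):

  #include <cstddef>
  #include <vector>

  // Before unswitching: the loop-invariant, uniform condition is tested on
  // every iteration.
  void Before(std::vector<float>* x, const std::vector<float>& a,
              const std::vector<float>& b, bool use_a) {
    for (size_t i = 0; i < x->size(); ++i) {
      if (use_a) (*x)[i] = a[i]; else (*x)[i] = b[i];
    }
  }

  // After unswitching: the condition is hoisted out of the loop and the
  // loop body is specialized for each side of the branch.
  void After(std::vector<float>* x, const std::vector<float>& a,
             const std::vector<float>& b, bool use_a) {
    if (use_a) {
      for (size_t i = 0; i < x->size(); ++i) (*x)[i] = a[i];
    } else {
      for (size_t i = 0; i < x->size(); ++i) (*x)[i] = b[i];
    }
  }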
In some shaders there are a lot of very large and deeply nested
structures. This creates a lot of work for scalar replacement. Also,
since commit ca4457b we have been very aggressive about rewriting
variables. This has caused a large increase in compile time from creating
and then deleting instructions.
To help lower the cost, I want to first run a cleanup that removes some of
the easy loads and stores. This reduces the number of symbols sroa has to
work on. It also reduces the amount of code sroa generates, and therefore
the amount of code the simplifier has to simplify.
To confirm the improvement, I ran numbers on three different sets of
shaders:
Time to run --legalize-hlsl:
Set #1: 55.89s -> 12.0s
Set #2: 1m44s -> 1m40.5s
Set #3: 6.8s -> 5.7s
Time to run -O:
Set #1: 18.8s -> 10.9s
Set #2: 5m44s -> 4m17s
Set #3: 7.8s -> 7.8s
Contributes to #1328.
This also fixes a bug. In `UpdateDefUse`, if the definition already exists,
we are not supposed to analyze it again; when we do, the entries for the
definition are deleted, and we don't want that. The check for this was
wrong.
This function now checks for side-effects before adding operand
instructions to the dead instruction work list.
Because this fix puts more pressure on IsCombinatorInstruction() to
be correct, this commit adds all OpConstant* and OpType* instructions
to the combinator_ops_ set.
Fixes #1341.
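A hedged sketch of the side-effect check (the names and data structures here
are hypothetical, not the actual pass code):

  #include <cstdint>
  #include <queue>
  #include <unordered_set>

  struct Inst {
    uint32_t opcode;
  };

  // Only side-effect-free ("combinator") instructions may be queued for
  // removal when they lose their last use; anything with side effects must
  // stay even if its result is unused.
  void MaybeQueueOperandDef(const Inst& operand_def,
                            const std::unordered_set<uint32_t>& combinator_ops,
                            std::queue<const Inst*>* worklist) {
    if (combinator_ops.count(operand_def.opcode) != 0) {
      worklist->push(&operand_def);
    }
  }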
When inlining a function call, the instructions in the same basic block
as the call get cloned. The clones are added to the set of new blocks
containing the inlined code, and the original instructions are deleted.
This PR changes this so that we simply move the instructions to the
new blocks, which saves the creation and deletion of those
instructions.
Contributes to #1328.
This change implements instruction folding for arithmetic operations
that are redundant, specifically:
x + 0 = 0 + x = x
x - 0 = x
0 - x = -x
x * 0 = 0 * x = 0
x * 1 = 1 * x = x
0 / x = 0
x / 1 = x
mix(a, b, 0) = a
mix(a, b, 1) = b
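A scalar illustration of the folds listed above; mix is written out
explicitly here, while the optimizer folds the corresponding GLSL.std.450
instruction. Note that some of these identities (for example x * 0 = 0) are
not exact for every IEEE input (NaN, infinities, signed zero), which is
presumably why floating-point folding is guarded by checks like the
IsFloatingPointFoldingAllowed mentioned earlier:

  #include <cassert>

  // Linear interpolation, matching the behavior of GLSL mix(a, b, t).
  static float Mix(float a, float b, float t) { return a + t * (b - a); }

  int main() {
    float x = 3.5f;
    assert(x + 0.0f == x);     // x + 0 = x
    assert(0.0f + x == x);     // 0 + x = x
    assert(x - 0.0f == x);     // x - 0 = x
    assert(0.0f - x == -x);    // 0 - x = -x
    assert(x * 0.0f == 0.0f);  // x * 0 = 0
    assert(x * 1.0f == x);     // x * 1 = x
    assert(0.0f / x == 0.0f);  // 0 / x = 0
    assert(x / 1.0f == x);     // x / 1 = x
    assert(Mix(1.0f, 2.0f, 0.0f) == 1.0f);  // mix(a, b, 0) = a
    assert(Mix(1.0f, 2.0f, 1.0f) == 2.0f);  // mix(a, b, 1) = b
    return 0;
  }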
Cache ExtInst import id in feature manager
This allows us to avoid string lookups during optimization; for now we
just cache the GLSL std450 import id, but I can imagine caching more sets
as they become utilized by the optimizer.
Add tests for add/sub/mul/div/mix folding
The tests cover scalar float/double cases, and some vector cases.
Since most of the code for floating point folding is shared, the tests
for vector folding are not as exhaustive as scalar.
To test sub->negate folding I had to implement a custom fixture.
Building the def-use chains is very expensive, so we do not want to
invalidate them if it is not necessary. At the moment, it seems like
most optimizations are good at not invalidating the def-use chains, but
the simplification pass does.
This PR gets the simplification pass to keep the analyses valid.
Contributes to #1328.
On some shader code we have in our testsuite, Phi insertion is showing
massive compile time slowdowns, particularly during destruction. The
specific shader I was looking at has about 600 variables to keep track
of and around 3200 basic blocks. The algorithm is currently O(var x
blocks), which means maps with around 2M entries. This was taking about
8 minutes of compile time.
This patch changes the tracking of stored variables to be more sparse.
Instead of having every basic block contain all the tracked variables in
the map, they now have only the variables actually stored in that block.
This speeds up deallocation, which brings down compile time to about
1m20s.
Note that this is not the definitive fix for this. I will rewrite Phi
insertion to use a standard SSA rewriting algorithm
(https://github.com/KhronosGroup/SPIRV-Tools/issues/893).
This contributes to
https://github.com/KhronosGroup/SPIRV-Tools/issues/1328.
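A minimal sketch of the sparse tracking described above (the type aliases
and names are hypothetical):

  #include <cstdint>
  #include <unordered_map>

  using BlockId = uint32_t;
  using VarId = uint32_t;
  using ValueId = uint32_t;

  // Dense form (previous behavior): every block carried an entry for every
  // tracked variable, so the number of map entries, and hence the teardown
  // cost, was proportional to vars * blocks.
  //
  // Sparse form (this patch): a block only records the variables actually
  // stored in it, so the total number of entries is proportional to the
  // number of stores.
  std::unordered_map<BlockId, std::unordered_map<VarId, ValueId>> stored_values;

  void RecordStore(BlockId block, VarId var, ValueId value) {
    stored_values[block][var] = value;  // touch only what this block stores
  }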
I mixed up two cases when folding an OpCompositeExtract that is fed by
an OpCompositeInsert. The specific cases are demonstrated in the new
test. I mixed up the conditions for the cases, and treated one like the
other.
Fixes #1323.
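For reference, a hedged sketch of the two cases (names hypothetical; this is
not the actual folding-rule code):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  enum class FoldAction { kUseInsertedObject, kExtractFromBase, kNoFold };

  // %ins = OpCompositeInsert of %obj into %base at <insert_indices>
  // %ext = OpCompositeExtract from %ins at <extract_indices>
  FoldAction Classify(const std::vector<uint32_t>& insert_indices,
                      const std::vector<uint32_t>& extract_indices) {
    size_t common = std::min(insert_indices.size(), extract_indices.size());
    for (size_t i = 0; i < common; ++i) {
      if (insert_indices[i] != extract_indices[i]) {
        // The extract reads a part the insert did not touch, so it can look
        // through the insert and extract from %base instead.
        return FoldAction::kExtractFromBase;
      }
    }
    if (insert_indices.size() == extract_indices.size()) {
      // The extract reads exactly what was inserted: fold to %obj.
      return FoldAction::kUseInsertedObject;
    }
    // One index chain is a prefix of the other: be conservative.
    return FoldAction::kNoFold;
  }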
* Now track propagation status and assert on bad statuses
* Added helper methods to access instruction propagation status
* Modified the phi meet operator to properly reflect the paper it is
based on (see the sketch after this list)
* Modified SSA edge addition so that all edges are added, but only on
state changes
* Fixed a bug in instruction simulation where interesting conditional
branches would not mark the interesting edge as executed
* Added a test to catch this bug
* Added an ostream operator for SSAPropagator::PropStatus
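A hedged sketch of a standard CCP-style meet for Phi operands, as referenced
above (illustration only; the actual pass operates on SPIR-V constants and
only meets values coming in over executable edges):

  #include <cstdint>

  // Standard three-level CCP lattice: Undef (no information yet) is the
  // identity of the meet, Varying absorbs everything, and two different
  // constants meet to Varying.
  enum class State { kUndef, kConstant, kVarying };

  struct LatticeCell {
    State state;
    int64_t constant;  // only meaningful when state == kConstant
  };

  LatticeCell Meet(const LatticeCell& a, const LatticeCell& b) {
    if (a.state == State::kVarying || b.state == State::kVarying) {
      return {State::kVarying, 0};
    }
    if (a.state == State::kUndef) return b;
    if (b.state == State::kUndef) return a;
    if (a.constant == b.constant) return a;  // same constant on both edges
    return {State::kVarying, 0};             // conflicting constants
  }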
The simplification pass works better after all of the dead branches are
removed, so we swap them around in the legalization passes. Also
adding the simplification pass to the performance passes right after dead
branch elimination.
Added CCP to the legalization passes so we can propagate the constants
into the branches and remove as many branches as possible. CCP is
designed to still find opportunities even if the branches are dead, so it
is a good place for it.
Fixes #1118