Commit Graph

2154 Commits

Each entry lists Author, SHA1, Message, and Date.
Hans-Kristian Arntzen
9436cd3036 MSL: Deal with array copies from and to threadgroup. 2019-08-27 13:18:01 +02:00
Hans-Kristian Arntzen
1017a02aad
Merge pull request #1133 from KhronosGroup/fix-1115
Deal with ldexp taking uint input.
2019-08-27 13:17:43 +02:00
Hans-Kristian Arntzen
b198b15b27
Merge pull request #1131 from KhronosGroup/fix-1114
Remove unnecessary continue block statements
2019-08-27 13:17:31 +02:00
Hans-Kristian Arntzen
2f7848dcda Deal with ldexp taking uint input.
Need to value-cast to int first.
2019-08-27 11:19:54 +02:00
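
A minimal sketch of the idea in 2f7848dcda, with invented names (not the actual SPIRV-Cross emitter): GLSL's ldexp() only accepts a signed exponent, so when the SPIR-V operand is a uint, the emitted expression wraps it in an int(...) value cast.

#include <string>

// Hypothetical helper: wrap the exponent in an int(...) value cast when its
// SPIR-V type is unsigned, since GLSL's ldexp() takes a signed exponent.
std::string emit_ldexp(const std::string &significand, const std::string &exponent,
                       bool exponent_is_uint)
{
	std::string exp_expr = exponent_is_uint ? "int(" + exponent + ")" : exponent;
	return "ldexp(" + significand + ", " + exp_expr + ")";
}

// emit_ldexp("x", "e", true) yields "ldexp(x, int(e))".
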
Hans-Kristian Arntzen
5d97dae1eb Move branchless analysis to CFG.
Traverse backwards instead, which is far more robust. This should now elide
essentially all redundant continue; statements.
2019-08-27 10:19:19 +02:00
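
To illustrate what 5d97dae1eb and 55c2ca90ae elide, here is a hand-written example (not generated output) of the redundant statement: when continue is the last thing in the loop body, control falls through to the continue block anyway, so emitting it adds nothing.

#include <cstdio>

int main()
{
	for (int i = 0; i < 4; i++)
	{
		printf("%d\n", i);
		continue; // Redundant: control reaches the loop's continue block regardless.
	}
	return 0;
}
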
Hans-Kristian Arntzen
55c2ca90ae Elide branches to continue block when continue block is also a merge. 2019-08-27 10:19:01 +02:00
Hans-Kristian Arntzen
903ef0e40a
Merge pull request #1130 from KhronosGroup/fix-1112
Deal correctly with sign on bitfield operations.
2019-08-26 16:23:00 +02:00
Hans-Kristian Arntzen
cf95dc2ef7
Merge pull request #1129 from KhronosGroup/fix-1110
Fix variable scope when switch block exits multiple times.
2019-08-26 11:39:11 +02:00
Hans-Kristian Arntzen
b3305799a8 Deal correctly with sign on bitfield operations.
Need a lot of special purpose implementation functions for these.
2019-08-26 11:36:36 +02:00
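
A hand-written illustration of why sign matters here (not the emitted helper functions themselves): a signed bitfield extract must sign-extend the field, while the unsigned variant must zero-extend it, so the two cannot share a single implementation.

#include <cstdint>

// Assumes 1 <= bits <= 31 and offset + bits <= 32.
int32_t extract_signed(uint32_t value, uint32_t offset, uint32_t bits)
{
	uint32_t field = (value >> offset) & ((1u << bits) - 1u);
	uint32_t sign_bit = 1u << (bits - 1u);
	// Classic sign-extension trick: flip the sign bit, then subtract it.
	return int32_t((field ^ sign_bit) - sign_bit);
}

uint32_t extract_unsigned(uint32_t value, uint32_t offset, uint32_t bits)
{
	// Zero-extension is just the masked shift.
	return (value >> offset) & ((1u << bits) - 1u);
}
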
Hans-Kristian Arntzen
e3d4dddfec Fix variable scope when switch block exits multiple times.
Inner scope can still dominate here, so we need to be conservative when
we observe switch blocks specifically. Normal selection merges cannot
merge from multiple paths.
2019-08-26 10:05:43 +02:00
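
A hand-written example of the hazard e3d4dddfec guards against (not generated output): several switch cases can write the variable and then break to the same merge block, so the declaration has to stay in the outer scope; a plain if/else selection merge cannot be reached from multiple defining paths in the same way.

#include <cstdio>

int main()
{
	int selector = 2;
	int value = 0; // Must live in the outer scope; more than one case below defines it.
	switch (selector)
	{
	case 1:
		value = 10;
		break;
	case 2:
		value = 20;
		break;
	default:
		value = 30;
		break;
	}
	printf("%d\n", value);
	return 0;
}
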
Hans-Kristian Arntzen
4ce04480ec
Merge pull request #1111 from KhronosGroup/fix-1108
Fix severe performance issue with invariant expression invalidation.
2019-08-01 10:01:15 +02:00
Hans-Kristian Arntzen
b97e9b0499 Fix severe performance issue with invariant expression invalidation.
We were going down a tree of expressions multiple times and this caused
an exponential explosion in time, which was not caught until recently.

Fix this by blocking any traversal going through an ID more than one
time.

Overall, this fix improves performance on a particular test shader by almost an
order of magnitude, instead of slowing it down by ~75x.
2019-08-01 09:55:21 +02:00
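
A minimal sketch of the fix described in b97e9b0499, with invented names (not the actual SPIRV-Cross expression handling): remembering which IDs have already been visited turns the repeated tree walk into a traversal that touches each node at most once.

#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using ID = uint32_t;

// Hypothetical recursive invalidation that blocks re-entry into already-visited IDs.
void invalidate(ID id, const std::unordered_map<ID, std::vector<ID>> &dependees,
                std::unordered_set<ID> &visited)
{
	if (!visited.insert(id).second)
		return; // Already went through this ID once; do not traverse it again.
	auto itr = dependees.find(id);
	if (itr == dependees.end())
		return;
	for (ID dependee : itr->second)
		invalidate(dependee, dependees, visited);
}
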
Hans-Kristian Arntzen
ffca8735ff
Merge pull request #1105 from cdavis5e/msl-unify-as
MSL: Unify the get_*_address_space() methods.
2019-07-29 10:19:12 +02:00
Chip Davis
df18d98bea MSL: Unify the get_*_address_space() methods.
These methods have largely the same logic, with minor differences. That
I felt compelled to duplicate the logic into another method was one of
the things that bothered me about the variable pointers change. This
cleans that part of the code up; now we don't have two places to change.
2019-07-26 09:43:28 -05:00
Hans-Kristian Arntzen
d378413040
Merge pull request #1103 from KhronosGroup/fix-1100
MSL: Cleanup temporary use with emit_uninitialized_temporary.
2019-07-26 14:35:18 +02:00
Hans-Kristian Arntzen
87513f9ac0
Merge pull request #1102 from KhronosGroup/fix-1096
MSL: Deal with Modf/Frexp where output is access chain to scalar.
2019-07-26 14:28:16 +02:00
Hans-Kristian Arntzen
0630a8533c
Merge pull request #1101 from KhronosGroup/fix-1095
Do not force temporary unless continue-only for loop dominates.
2019-07-26 14:27:13 +02:00
Hans-Kristian Arntzen
c3e8e728d8 MSL: Cleanup temporary use with emit_uninitialized_temporary. 2019-07-26 11:16:43 +02:00
Hans-Kristian Arntzen
abb345d0b3 MSL: Deal with Modf/Frexp where output is access chain to scalar.
This is not allowed, as we cannot take a mutable reference to a
vec.{x,y,z,w} component. We only care about the scalar case, since entire vectors are fine.
2019-07-26 11:02:38 +02:00
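
A hypothetical sketch of the workaround in abb345d0b3, with invented names (not the real MSL backend): when the output chain points at a single vector component, the call is routed through a scalar temporary and copied back, because MSL cannot bind a mutable reference to v.x / v.y / v.z / v.w.

#include <string>

// Rewrites "frac = modf(x, v.y)" into a scalar temporary plus a copy-back.
std::string emit_modf_to_component(const std::string &x, const std::string &component_lvalue,
                                   const std::string &result, const std::string &temp)
{
	return "float " + temp + ";\n" +
	       "float " + result + " = modf(" + x + ", " + temp + ");\n" +
	       component_lvalue + " = " + temp + ";\n";
}

// emit_modf_to_component("x", "v.y", "frac", "_tmp") yields:
//   float _tmp;
//   float frac = modf(x, _tmp);
//   v.y = _tmp;
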
Hans-Kristian Arntzen
d620f1dd26 Do not force temporary unless continue-only for loop dominates.
We would force temporaries in unexpected places, causing assertions to
throw if access chains were consumed in such loops.
2019-07-26 10:39:05 +02:00
Hans-Kristian Arntzen
301eab1b7a
Merge pull request #1099 from KhronosGroup/fix-1091
Missed case where DoWhile continue block deals with Phi.
2019-07-25 17:44:17 +02:00
Hans-Kristian Arntzen
798282d303
Merge pull request #1098 from KhronosGroup/fix-1090
Vulkan GLSL: Support disabling samplerless texture function EXT.
2019-07-25 16:10:26 +02:00
Hans-Kristian Arntzen
e06efb7259 Missed case where DoWhile continue block deals with Phi. 2019-07-25 12:30:50 +02:00
Hans-Kristian Arntzen
12ca9d1982 Vulkan GLSL: Support disabling samplerless texture function EXT.
Some platforms support Vulkan GLSL, but apparently not this extension ...
2019-07-25 11:07:14 +02:00
Hans-Kristian Arntzen
78fccc4d5c Merge branch 'msl-dispatch-base' 2019-07-25 10:32:14 +02:00
Hans-Kristian Arntzen
3c03b55c46 Workaround MSVC 2013 compiler issues. 2019-07-25 10:28:11 +02:00
Hans-Kristian Arntzen
35fc810a0c Merge branch 'msl-dispatch-base' of git://github.com/cdavis5e/SPIRV-Cross into msl-dispatch-base 2019-07-25 10:26:44 +02:00
Chip Davis
fb5ee4cb5c MSL: Adjust BuiltInWorkgroupId for vkCmdDispatchBase().
This command allows the caller to set the base value of
`BuiltInWorkgroupId`, and thus of `BuiltInGlobalInvocationId`. Metal
provides no direct support for this... but it does provide a builtin,
`[[grid_origin]]`, normally used to pass the base values for the stage
input region, which we will now abuse to pass the dispatch base and
avoid burning a buffer binding.

`[[grid_origin]]`, as part of Metal's support for compute stage input,
requires MSL 1.2. For 1.0 and 1.1, we're forced to provide a buffer.

(Curiously, this builtin was undocumented until the MSL 2.2 release. Go
figure.)
2019-07-24 08:56:15 -05:00
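
A conceptual sketch of what fb5ee4cb5c arranges, with invented names (not the generated MSL): the dispatch base arrives through [[grid_origin]] on MSL 1.2+ or through a buffer on 1.0/1.1, and either way it is added on top of the raw threadgroup position to form the effective WorkgroupId.

#include <cstdint>

struct uint3
{
	uint32_t x, y, z;
};

// Hypothetical adjustment: effective WorkgroupId = raw threadgroup position + dispatch base.
uint3 adjusted_workgroup_id(uint3 raw_id, uint3 dispatch_base)
{
	return { raw_id.x + dispatch_base.x, raw_id.y + dispatch_base.y, raw_id.z + dispatch_base.z };
}
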
Hans-Kristian Arntzen
07bb1a53e0
Merge pull request #1089 from KhronosGroup/msl-packing-refactor
MSL: Refactor buffer packing logic from ground up.
2019-07-24 15:35:00 +02:00
Hans-Kristian Arntzen
d90eeddcf1 Fix some typos in comments. 2019-07-24 12:14:19 +02:00
Hans-Kristian Arntzen
c62503bca7 Do not attempt to pack types which are already scalar. 2019-07-24 11:52:28 +02:00
Hans-Kristian Arntzen
4bc8729c0e HLSL query lod cleanups. 2019-07-24 11:34:28 +02:00
Hans-Kristian Arntzen
461f1506e7 Do not eagerly invalidate all active variables on a branch.
This is not necessary, as we must emit an invalidating store before we
potentially consume an invalid expression. In fact, we're still a bit
conservative here, as in this example:

int tmp = variable;
if (...)
{
    variable = 10;
}
else
{
    // Consuming tmp here is fine, but it was
    // invalidated while emitting other branch.
    // Technically, we need to study if there is an invalidating store
    // in the CFG between the loading block and this block, and the other
    // branch will not be a part of that analysis.
    int tmp2 = tmp * tmp;
}

Fixing this case means complex CFG traversal *everywhere*, and it feels like overkill.

Fixing this exposed a bug with access chains, where expression dependencies were not
inherited properly. Access chains are now considered forwarded if there
is at least one dependency which is also forwarded.
2019-07-24 11:17:30 +02:00
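
A minimal sketch of the forwarding rule stated above, with invented types (not the actual data structures): an access chain counts as forwarded as soon as any expression it depends on is itself still forwarded.

#include <vector>

struct Expression
{
	bool forwarded;
};

bool access_chain_is_forwarded(const std::vector<const Expression *> &dependencies)
{
	for (const Expression *dep : dependencies)
		if (dep->forwarded)
			return true;
	return false;
}
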
Hans-Kristian Arntzen
18bcc9b790 Do not disable temporary forwarding when we suppress usage tracking.
This subtle bug removed any expression validation for trivially swizzled
variables. Make usage suppression a more explicit concept rather than
just hacking off forwarded_temporaries.

There is some fallout here with loop generation since our expression
invalidation is currently a bit too naive to handle loops properly.
The forwarding bug masked this problem until now.

If part of the loop condition is also used in the body, we end up
reading an invalid expression, which in turn forces a temporary to be
generated in the condition block, which is not good. We'll need to be smarter
here ...
2019-07-23 19:18:44 +02:00
Hans-Kristian Arntzen
8ba0507a6d Add another test for unpacking without load forwarding. 2019-07-23 17:14:59 +02:00
Hans-Kristian Arntzen
1ece67a050 Look at pointee type when unpacking expressions.
We might be unpacking in OpLoad, so we don't want any pointer types from
access chains creeping in.
2019-07-23 17:07:15 +02:00
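
A rough sketch of the idea in 1ece67a050, with an invented type (not SPIRV-Cross's SPIRType): when the expression being unpacked came through an access chain, strip the pointer wrapper and make packing decisions on the pointee type.

struct Type
{
	bool pointer = false;
	const Type *pointee = nullptr;
};

// Returns the value type that unpacking should look at.
const Type &unpack_target_type(const Type &type)
{
	return (type.pointer && type.pointee) ? *type.pointee : type;
}
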
Hans-Kristian Arntzen
646e04294a Fix some warnings when building in MoltenVK. 2019-07-23 16:39:13 +02:00
Hans-Kristian Arntzen
ebe109d91d Deal correctly with non-forwarded packed loads.
Need to unpack the expression if we're not forwarding.
2019-07-23 16:25:19 +02:00
Hans-Kristian Arntzen
79f533b662 Test CompositeInsert/Extract/VectorShuffle on packed vectors. 2019-07-23 15:44:35 +02:00
Hans-Kristian Arntzen
5582145549 Add test for array of scalar struct. 2019-07-23 15:30:03 +02:00
Hans-Kristian Arntzen
5c1cb7accf Recursively pack struct types when we find scalar packed structs. 2019-07-23 15:24:53 +02:00
Hans-Kristian Arntzen
3fa2b14634 Run format_all.sh. 2019-07-23 12:23:41 +02:00
Hans-Kristian Arntzen
ef1fa71bba Unpack vector expression in Matrix-Vector multiplies. 2019-07-23 12:22:40 +02:00
Hans-Kristian Arntzen
0f10601f27 Test matrix multiplies in more complex scenarios. 2019-07-23 12:12:24 +02:00
Hans-Kristian Arntzen
978253c804 Test implicit packing of struct members. 2019-07-23 12:04:15 +02:00
Hans-Kristian Arntzen
46e757b278 GLSL/HLSL: Verify member alignment for explicit offset as well. 2019-07-23 11:53:33 +02:00
Hans-Kristian Arntzen
fc741596d4 Add tests for struct padding and self-alignment. 2019-07-23 11:46:34 +02:00
Hans-Kristian Arntzen
7277c7ac46 Use to_unpacked_row_major_expression to unify row-major in MSL/GLSL. 2019-07-23 11:36:54 +02:00
Hans-Kristian Arntzen
47a18b9f1b Simplify row-major matrix/vector multiplies. 2019-07-23 10:56:57 +02:00
Hans-Kristian Arntzen
d584d833fa Test array of std140 vectors. 2019-07-23 10:38:32 +02:00