pack(x,y,bits), an alias for x|(y<<bits), originally existed only so we
could implement it with the arm64 SLI instruction, but I've since
realized that was misguided.
I had thought the assumption on pack ("(x & (y << bits)) == 0"), i.e.
"no overlap between x and the shifted y", was enough to make using SLI
legal, but it's actually not a strong enough requirement.
The SLI docs say "...inserts the result into the corresponding vector
element in the destination SIMD&FP register such that the new zero bits
created by the shift are not inserted but retain their existing value."
The key thing not mentioned there is what happens with zero bits _not_
created by the shift, the ones already present at the top of y. Those
are of course inserted, overwriting any previous values.
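Here's my scalar reading of one SLI lane, just to make the hazard
concrete (a sketch of the semantics as I understand the docs, not the
hardware pseudocode):

    // dst = SLI(dst, src, bits): shift src left, then keep only the dst
    // bits sitting under the zeros that the shift created.  Bits under
    // src's pre-existing top zeros are overwritten, not retained.
    uint32_t sli(uint32_t dst, uint32_t src, int bits) {   // bits in [0,31]
        uint32_t kept = (1u << bits) - 1;   // low bits opened up by the shift
        return (src << bits) | (dst & kept);
    }

Note that with bits == 0 nothing of dst survives at all, which is
exactly how the second ordering below loses alpha.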
This means SLI (and so pack()) become strictly order dependent in a way
I had never intended. This will work as you'd think,
skvm::I32 px = splat(0);
px = pack(px, r, 0);
px = pack(px, a, 24);
but this version swapping the two calls to pack() will overwrite alpha,
skvm::I32 px = splat(0);
px = pack(px, a, 24);
px = pack(px, r, 0);
I find that error-prone, so I've removed Op::pack and replaced it
with a simple expansion to x|(y<<bits). That of course works in either
order.
This new test can't JIT at head, but if we implement the other missing
instructions (soon, in a dependent CL) it would start failing when JIT'd.
The interpreter and x86 were both fine, since they're both doing what's
now the only approach to pack(), the simple x|(y<<bits).
I've left assembler support for SLI in case we want to try it again.
Change-Id: Iaf879309d3e1d0a458a688f3a62556e55ab05e23
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/337197
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This tweak makes these conversion functions' names
match the names of the types, e.g. to_F32() makes F32.
Change-Id: I4d71c9bd17d835d09375e3343ee4316082b02889
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/319763
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
I've been thinking and rethinking and rethinking how best to use 16-bit
values like Q14 fixed-point in SkVM. Here are some ways:
A) don't... just use 32-bit values instead
B) use 16x2-bit pairs to match the narrower 32-bit lane count
C) double-pump 32-bit values to match the wider 16-bit lane count
D) use native 16- and 32-bit values and let the backends sort it out
A) is how things work today, and C) is how SkRasterPipeline's lowp mode
works. Having given B) and C) both a good, fair shake, I found each
already awkward to work with after writing just a few functions. I
would not give up on them entirely, but they're no longer my favorites.
D) is subtle and my new favorite. It's easiest to program with SkVM
when the values we're holding represent single values and the backend
handles any parallelism for us. That suggests we add a simple 16-bit
Q14 type alongside the existing 32-bit I32 and F32 types, which can be
actively converted between as normal, but not freely bit-punned as a
no-op. D) says people shouldn't have to choose between A-C) up front...
each backend can handle that itself.
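As a reminder of the numbers involved (Q14 treats ±0x4000 as ±1.0, per
the earlier Q14 work), a scalar sketch of what those conversions mean,
not the skvm API itself:

    #include <cmath>
    #include <cstdint>
    // Real conversions, not bit puns: the scale factor is 16384 == 0x4000.
    int16_t to_q14(float f)   { return (int16_t)lrintf(f * 16384.0f); }  // no clamping shown
    float   to_f32(int16_t q) { return q * (1 / 16384.0f); }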
Under strategy D), it's entirely the backend's job to decide how to
represent each value, and how to vectorize them. We don't need to
know as a user, and the backends can use the program itself to inform
how they vectorize. 16-bit values could live in xmm registers and
32-bit values in ymm, or the 16-bit values could go in the low half of a
ymm, or the even lanes of a ymm, or a full ymm and use two for 32-bit
values, etc. etc. This all is a backend choice, not something we should
have to know about when writing a program using Q14/I32/F32.
My next steps are to get Q14 operations tested and plumbed through the
JIT again, and to build out a blitter and a few effects using Q14 color
channels. Then, independently, we can look at each backend and how to
vectorize them. Some ideas:
1) keep running at current vectorization, with half rate 16-bit ops
2) pump up to 2x wider vectorization unconditionally to favor 16-bit
3) pump up to 2x wider vectorization only when any 16-bit op is used
These choices can be made independently for each backend (JIT, LLVM,
interp), and I wouldn't be surprised to find that we'll want to do them
differently. For instance, the interpreter is already running at 32x
vectorization... pumping it higher might not help anything.
Change-Id: Ib8ad2b1bf790e8c4e3acfb4818d4032f7628e8f8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/319321
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
I suspected this wouldn't work, but turns out it's fine.
Change-Id: I91042d458804f3a6af29389de5e6622a39cf23e5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317309
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
It shifts a negative number left,
which UBSAN reminds us is not allowed in C++.
Cq-Include-Trybots: luci.skia.skia.primary:Test-Debian10-Clang-GCE-CPU-AVX2-x86_64-Debug-All-SkVM_ASAN
Change-Id: I955d53b677f605b3a0f6e2bd31b16e2cc2f64ac5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317260
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
These tests cover all the new Q14x2 ops.
As usual, left myself a few TODOs...
While working on this, I uncovered a little subtlety about the blend
instructions we use to implement Op::select... unlike the platonic
(cond & true_val) | (~cond & false_val)
the hardware instructions are not bitwise! Each of the fancy blend
instructions like vblendvps or vpblendvb has a fundamental granularity
larger than a bit (4 bytes for vblendvps, 1 byte for vpblendvb). If
you're using a mask with a granularity of, say, 2 bytes, you need to be
using something with equally fine granularity: bitwise is ok, bytewise
is ok, 2-byte-wise is ok, but 4-byte-wise isn't.
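Here's that failure mode in scalar form, pretending one 32-bit value is
a single 4-byte blend lane (a sketch, not any backend's actual code):

    #include <cstdint>
    // Platonic bitwise select.
    uint32_t select_bitwise(uint32_t cond, uint32_t t, uint32_t f) {
        return (cond & t) | (~cond & f);
    }
    // vblendvps-style: one decision per 4-byte lane, taken from its top bit.
    uint32_t select_4bytewise(uint32_t cond, uint32_t t, uint32_t f) {
        return (cond & 0x80000000u) ? t : f;
    }
    // With a 2-byte-granularity mask cond=0x0000ffff, t=0x11112222, f=0x33334444:
    //   select_bitwise    -> 0x33332222  (low 16 bits from t, high 16 from f)
    //   select_4bytewise  -> 0x33334444  (whole lane from f; the 16-bit pick is lost)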
Took a quick survey, and the Op::select implementations we're using in
the x86 and ARM JITs are both bytewise, so I think they're fine. I'd
have to think
a bit about LLVM, but these unit tests should at least fire if it were
wrong. The skvx if_then_else() I've been using in the interpreter has
been 4-byte-wise, but I'm refining that down to 1-byte-wise with
https://skia-review.googlesource.com/c/skia/+/317170.
Change-Id: I09cbc8b91cdb9e50088ee4f6ddf202faa1bf2cb1
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317159
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Add a slew of 16-bit instructions for experiments.
I want to try a fixed-point path through SkVMBlitter, continuing to
represent geometry with F32, but color channels in 16 bits, with several
possible representations:
- unorm8 lowp like SkRasterPipeline (0 -> 0.0, 0x00ff -> 1.0)
- 15-bit SkFixed15 fixed-point (0 -> 0.0, 0x8000 -> 1.0)
- 14-bit signed fixed-point (0 -> 0.0, ±0x4000 -> ±1.0)
I'm leaning towards the 14-bit version because it can hold a good range
of temporary values in [-2,2), or perhaps even a 13-bit analog for a
little more safety margin. Mostly something new to try.
Most of these instructions are pretty obvious, with notes on a few:
vpavgw is an unsigned (x+y+1)>>1, and is useful for converting
unorm8 up to Q14. There are a couple of ways to do this pretty well;
using vpavgw is the best, and it takes the fewest instructions:
A) (x << 6) + ( x >> 2) + (x == 255) // Ok approx.
B) (x << 6) + ((x+1) >> 2) // Better approx.
C) vpavgw(x << 7, x >> 1) // Perfect math!
The best reverse math I've found is (x >> 6) - (x > 16319).
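If you want to convince yourself of the "perfect math" claim, here's a
brute-force scalar check (avgw models one unsigned 16-bit lane of
vpavgw):

    #include <cassert>
    #include <cstdint>
    static uint16_t avgw(uint16_t x, uint16_t y) { return (uint16_t)((x + y + 1) >> 1); }
    int main() {
        for (int x = 0; x < 256; x++) {
            uint16_t got  = avgw((uint16_t)(x << 7), (uint16_t)(x >> 1));  // option C
            uint16_t want = (uint16_t)(x * 16384.0 / 255.0 + 0.5);  // round(x/255 * 0x4000)
            assert(got == want);
        }
        return 0;
    }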
vpmulhrsw is the key to the whole thing as usual, letting us do
16x16->16-bit multiplies. An SkFixed15 multiply is vpmulhrsw
followed by vpabsw (also added here), and a Q14 multiply is
vpmulhrsw followed by a simple <<1.
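A scalar reference for that multiply, per the pmulhrsw definition:

    #include <cstdint>
    // One signed 16-bit lane of vpmulhrsw: (x*y + 0x4000) >> 15, i.e. x*y/2^15
    // rounded to nearest (assumes arithmetic >> on negatives, as everywhere we run).
    int16_t mulhrs(int16_t x, int16_t y) {
        return (int16_t)(((int32_t)x * y + 0x4000) >> 15);
    }
    // Q14 multiply (1.0 == 0x4000): double the result back up to Q14.
    // (*2 rather than <<1 here just to keep UBSAN quiet about negative shifts.)
    int16_t q14_mul(int16_t x, int16_t y) {
        return (int16_t)(mulhrs(x, y) * 2);
    }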
I've added both signed and unsigned min and max. Not entirely
sure they'll all be used, but I do have my eye on vpminuw as a
single-instruction clamp to [0,0x4000] ~~> [0.0,1.0], treating
any negative Q14 as very large unsigned.
Change-Id: I0db7f3f943ef6c9a600821444cc5b003fe5f675d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317119
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Added vpinsrd to make it easy, and fixed a comment on vpinsrb's tests.
Nothing too tricky, just the naive implementations.
The hardest part was getting all the data to the right places.
No diffs!
Change-Id: Ie4c1f1e429abfa75ca80a93d108061287d5ace80
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306872
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This no-op refactor ports the lane-immediate idea to the 64-bit loads
and to the 128-bit stores, and renames everything to simple load64,
load128, and store128.
The store128 part of this change is _slightly_ sneaky in that we pack
the argument pointer index and the lane both into int immz, but there's
plenty of space there: lane needs 1 bit, and the argument pointer index
needs maybe 3 or 4 bits, max.
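Purely to illustrate that there's room, one way the packing could look
(I'm not asserting this exact bit layout):

    // hypothetical: lane in the low bit, argument pointer index above it
    int pack_immz(int ptr_index, int lane) { return (ptr_index << 1) | lane; }
    int ptr_of   (int immz)                { return immz >> 1; }
    int lane_of  (int immz)                { return immz & 1; }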
Change-Id: I3fa01bf31312b8a69c7e287d649470ba15a8ea40
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306810
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This allows us to JIT functions taking one more argument.
Our x86 JIT was the weak link: LLVM handles arbitrary numbers
of arguments, and our aarch64 JIT handles up to seven already.
I plan to pass one more source varying sometimes, and in extremely
unusual cases we could need six pointers:
1) uniforms
2) destination
3) 8-bit coverage plane
4) 8-bit multiply plane
5) 8-bit add plane
6) source varying
Those varyings 3-5 are all indexable off the coverage pointer 3) if we
know the dimensions, so I could be convinced that we should only pass
one there, making our maximum number of arguments four. I'll be looking
at that independently, but it doesn't hurt to have the capability to go
up to six either way.
Change-Id: Id7a5b88e382a95bb560633e95c5be273b7ea67d1
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306241
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
The 3 bits that represent rsp are used as a signaling mechanism to
choose between compact base + disp encoding and full base + disp +
scale*index encoding, one byte longer. If a base is encoded as rsp in
the MOD R/M byte, an SIB byte follows.
r12 shares its 3 bottom bits with rsp, so we need to treat it like rsp
when deciding whether or not we need an SIB byte. (As usual registers
r8-15's distinguishing upper bit is carried by a REX/VEX prefix.)
rsp is also treated as a special signal in the index field of the SIB
byte, meaning essentially scale=0, and as a result it's not possible to
use rsp as an index. It _is_ possible to use r12 as an index, with a
test added here. This worked without any code change. It seems the
index=rsp -> scale=0 signal is triggered by all four bits of the index
register, including the X upper bit from REX/VEX.
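Boiled down to code, the rules above look something like this (register
numbers in the usual x86-64 scheme, rsp = 4, r12 = 12; a sketch, not the
assembler's actual helpers):

    // Only the low 3 bits of the base land in MOD R/M, so rsp and r12 both
    // read as 0b100 there and force an SIB byte.
    bool base_needs_sib(int base) { return (base & 7) == 4; }
    // The "no index" signal checks all four index bits, so rsp (4) can't be
    // an index but r12 (12) can.
    bool can_be_index(int index)  { return index != 4; }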
I have found these charts useful:
https://wiki.osdev.org/X86-64_Instruction_Encoding#32.2F64-bit_addressing_2
Looking at those charts I noticed that rbp/r13 are also special cases,
so I'm eyeing them warily and will avoid using them for now.
Change-Id: Id78f826a39c060b03000eae7c50c642ef44d57db
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306237
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
vpermps (added here) makes this very easy,
with an index controlling what 32-bit values go where.
An index of the form {0,2,4,6|?,?,?,?} will put the 4 low 32-bit halves
of 4 64-bit values in lanes 0,1,2,3. We can use that twice to get all 8
low halves, then our new vperm2f128 to put them together. Conveniently
vpermps can also load directly from memory:
vpermps (%rdi), {0,2,4,6|?,?,?,?}, lo
vpermps 32(%rdi), {0,2,4,6|?,?,?,?}, hi
vperm2f128 0x20, lo,hi, dst
We don't care what those top four indices are for load64_lo, so we'll
use them as the indices for load64_hi. That makes the full index
{0,2,4,6|1,3,5,7}, and load64_hi will just vperm2f128 the other 128 bits
of lo/hi:
vpermps (%rdi), {?,?,?,?|1,3,5,7}, lo
vpermps 32(%rdi), {?,?,?,?|1,3,5,7}, hi
vperm2f128 0x31, lo,hi, dst
vpermps needs its index in a register, so we use a temporary for that.
Our logical lo can alias dst, and hi can alias that index, so it's just
one extra temporary register in the end.
Change-Id: Ie6a4efbf12ddada45dd09c0f580fa7350cf3019e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/305171
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Add some packing instructions to make it possible.
The gist is that we've got
r(x) = {a,b,c,d|e,f,g,h}
r(y) = {i,j,k,l|m,n,o,p}
where r(x) holds each low 32-bit half of a 64-bit value,
and r(y) holds the high halves. We want to write
a,i,b,j,c,k,d,l,e,m,...
So first the vpunpck[lh]dq instructions produce
L = {a,i,b,j|e,m,f,n}
H = {c,k,d,l|g,o,h,p}
which gets us halfway there. The vperm2f128s select the low (0x20) or
high (0x31) 128-bit halves of L/H, so we end up writing to memory
dst+0: a,i,b,j,c,k,d,l
dst+32: e,m,f,n,g,o,h,p
Existing tests cover that store64 works.
Change-Id: Ic00ad9bdb448b79867584c27cf0114a42ed32379
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/305156
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Three TODOs, all basically the same idea: divide-by-zero is not the only
way to produce non-finite results from a division. You can also divide
by something very near zero, and there may be other ways too.
Added is_finite() to make this clear. is_finite() is almost as cheap as
the comparisons it replaces, so performance shouldn't be affected.
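I'm not claiming this is exactly how is_finite() is written, but one
equally cheap formulation is a plain exponent check:

    #include <cstdint>
    #include <cstring>
    // Finite means the exponent bits aren't all ones (all ones is inf or NaN).
    bool is_finite(float x) {
        uint32_t bits;
        std::memcpy(&bits, &x, sizeof bits);
        return (bits & 0x7f800000) != 0x7f800000;
    }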
Change-Id: I0a803e9ab4e3286f4e10a13d3aacee370eaaa803
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/304669
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This adds load/store ops for 64-bit values, with two load64 instructions
(one returning the low 32 bits, one the high), and store64 taking both.
These are implemented in the interpreter and tested but not yet JIT'd
or hooked up for loading and storing 64-bit PixelFormats. Hopefully
those two CLs will follow shortly.
Change-Id: I7e5fc3f0ee5a421adc9fb355d0b6b661f424b505
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/303380
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
- add f32<->f16 functions to skvx
- add f32<->f16 x86 instructions to skvm::Assembler
- add f32<->f16 ops to skvm,
using the skvx functions in the interpreter
Still TODO:
use the new x86 instructions in the JIT
(For now, as in many other ways, the aarch64 JIT
continues to languish. I'll pick that back up one day.)
Change-Id: Ib8dc1ccdc75ecb23769ea4947d66d3ab22520f23
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/302942
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This is just minor little stuff I've been meaning to do,
with essentially no impact anywhere.
- Add an easy-to-flip switch to disable the JIT.
- Stop checking hasJIT() so carefully
in test_jit_and_interpreter(). This was helpful
for making progress but now just gets in the way.
Change-Id: I08065ba1f42700f9d7d63f8303af357ec5fe11ae
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/302944
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Just a little follow-up, adding a mem->xmm vmovups
instruction to make it possible. Nothing tricky.
Change-Id: I319e11839e44ccda46e664c82fb858a18499f9be
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/299883
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
It's just a shortcut for
Assembler::Label l;
a->label(&l);
and it never really took off.
It's easier to work on Label without it.
Change-Id: I4a060f78f235ac3fcc87b996f5d9404ffba43c53
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/288997
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
On Windows each time the test is run one gets
sk_fopen: fopen("resources\SkVMTest.expected", "wb") returned nullptr (errno:22): Invalid argument
due to 'expected' holding a read lock on the file when trying to open
the same file for writing. Windows locks files aggressively and in this
case there is no good reason to keep the read access when trying to
truncate and write to this file path.
This also changes the logic to only update the file in the error case
when the content would actually change. Previously it seems this file
would be re-written (usually with the same content) every time this test
ran.
Change-Id: I9c96f1e7e0692e57326fec351c7353c423014c9a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/287381
Commit-Queue: Ben Wagner <bungeman@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
Rename apply() to unary(), then add binary().
Fix unary to calculate N=base-inst+1.
Convert to simpler `auto&& fn` mode by renaming
approx_atan(y,x) to approx_atan2. Now we can pass
functions, lambdas, non-lambda functors, whatever.
Change-Id: I17a6aa137f224edc0accd0509c5023a30980fe39
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/286900
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Mike Reed <reed@google.com>
We're getting this wrong today, and likely also in these several
other instructions. We need to account for the immediate byte
that follows the ip-relative offset!
Add imm_byte_after_operand() to take care of this.
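The arithmetic, roughly (illustrative, not the assembler's actual code):
the disp32 is relative to the end of the whole instruction, and a
trailing immediate is part of that end.

    #include <cstddef>
    #include <cstdint>
    // 'here' is the offset just past the 4-byte displacement field.
    int32_t rip_disp32(size_t target, size_t here, int imm_bytes_after_operand) {
        return (int32_t)(target - (here + imm_bytes_after_operand));
    }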
Change-Id: If0f4359b0a8e9d769bfde0d8456726e82f798123
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/285237
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This is a reland of 1283d55f35
... this time, also checking for HSW feature set.
Original change's description:
> Reland "gather8/16 JIT support"
>
> This is a reland of 54659e51bc
>
> ... now expecting not to JIT when under ASAN/MSAN.
>
> Original change's description:
> > gather8/16 JIT support
> >
> > The basic strategy is one at a time, inserting 8- or 16-bit values
> > into an Xmm register, then expanding to 32-bit in a Ymm at the end
> > using vpmovzx{b,w}d instructions.
> >
> > Somewhat annoyingly we can only pull indices from an Xmm register,
> > so we grab the first four then shift down the top before the rest.
> >
> > Added a unit test to get coverage where the indices are reused and
> > not consumed directly by the gather instruction. It's an important
> > case, needing to find another register for accum that can't just be
> > dst(), but there's no natural coverage of that anywhere.
> >
> > Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> > Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> > Reviewed-by: Herb Derby <herb@google.com>
> > Commit-Queue: Mike Klein <mtklein@google.com>
>
> Change-Id: I67f441615b312b47e7a3182e85e0f787286d7717
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284472
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>
Change-Id: Id0e53ab67f7a70fe42dccca1d9912b07ec11b54d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284504
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This reverts commit 1283d55f35.
Reason for revert: one more try...
Original change's description:
> Reland "gather8/16 JIT support"
>
> This is a reland of 54659e51bc
>
> ... now expecting not to JIT when under ASAN/MSAN.
>
> Original change's description:
> > gather8/16 JIT support
> >
> > The basic strategy is one at a time, inserting 8- or 16-bit values
> > into an Xmm register, then expanding to 32-bit in a Ymm at the end
> > using vpmovzx{b,w}d instructions.
> >
> > Somewhat annoyingly we can only pull indices from an Xmm register,
> > so we grab the first four then shift down the top before the rest.
> >
> > Added a unit test to get coverage where the indices are reused and
> > not consumed directly by the gather instruction. It's an important
> > case, needing to find another register for accum that can't just be
> > dst(), but there's no natural coverage of that anywhere.
> >
> > Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> > Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> > Reviewed-by: Herb Derby <herb@google.com>
> > Commit-Queue: Mike Klein <mtklein@google.com>
>
> Change-Id: I67f441615b312b47e7a3182e85e0f787286d7717
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284472
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>
TBR=mtklein@google.com,herb@google.com
Change-Id: I953fcd2aef308fd901880618fa540ac9f6d88e84
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284503
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This is a reland of 54659e51bc
... now expecting not to JIT when under ASAN/MSAN.
Original change's description:
> gather8/16 JIT support
>
> The basic strategy is one at a time, inserting 8- or 16-bit values
> into an Xmm register, then expanding to 32-bit in a Ymm at the end
> using vpmovzx{b,w}d instructions.
>
> Somewhat annoyingly we can only pull indices from an Xmm register,
> so we grab the first four then shift down the top before the rest.
>
> Added a unit test to get coverage where the indices are reused and
> not consumed directly by the gather instruction. It's an important
> case, needing to find another register for accum that can't just be
> dst(), but there's no natural coverage of that anywhere.
>
> Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>
Change-Id: I67f441615b312b47e7a3182e85e0f787286d7717
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284472
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This reverts commit 54659e51bc.
Reason for revert: ASAN
Original change's description:
> gather8/16 JIT support
>
> The basic strategy is one at a time, inserting 8- or 16-bit values
> into an Xmm register, then expanding to 32-bit in a Ymm at the end
> using vpmovzx{b,w}d instructions.
>
> Somewhat annoyingly we can only pull indices from an Xmm register,
> so we grab the first four then shift down the top before the rest.
>
> Added a unit test to get coverage where the indices are reused and
> not consumed directly by the gather instruction. It's an important
> case, needing to find another register for accum that can't just be
> dst(), but there's no natural coverage of that anywhere.
>
> Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>
TBR=mtklein@google.com,herb@google.com
Change-Id: I912273e6ffc9258537ba806951a5964be0218d58
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284471
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
The basic strategy is one at a time, inserting 8- or 16-bit values
into an Xmm register, then expanding to 32-bit in a Ymm at the end
using vpmovzx{b,w}d instructions.
Somewhat annoyingly, we can only pull indices from an Xmm register,
so we grab the first four, then shift down the top half before grabbing
the rest.
Added a unit test to get coverage where the indices are reused and
not consumed directly by the gather instruction. It's an important
case, needing to find another register for accum that can't just be
dst(), but there's no natural coverage of that anywhere.
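For reference, the op being JIT'd computes, lane by lane, nothing
fancier than this (a scalar model, not the Xmm/Ymm staging described
above):

    #include <cstdint>
    void gather8(uint32_t dst[8], const uint8_t* base, const int32_t ix[8]) {
        for (int i = 0; i < 8; i++) {
            dst[i] = base[ix[i]];   // load one byte, zero-extend to 32 bits
        }
    }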
Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Funnel the ARM instructions through two op() helpers, one for 3-arg
vector instructions, another for all others (0,1,2 arg, optional imm).
More consistent use of (immN & N_mask) to make things clearer.
Add missing imm12 offset to load and store instructions, with tests.
Notice they're in element counts, so we can go up to 4096 16-byte stack
entries, not 256 entries like you might think.
Change-Id: I99a3ad30b7b0926f93da671f00d89759934e65b4
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284255
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
It's going to be easier to work on stack/register things if we don't
have to keep thinking of aarch64 as a special case.
This just sets up the frames; I'll follow up with JITMode::Stack.
Change-Id: Ic0df4c5deb9c7d55eb73a62e4b6b1c9919996974
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284243
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Move all the non-vector instructions together,
and convert them to use Operand where possible.
In general that can be any of
- (Operand, imm)
- (Operand, GP64)
- (GP64, Operand)
and that means there are two ways to encode (GP64,GP64)
instructions, so there's a disambiguator added.
Our measure of success is eliminating calls to rex()
except from our one helper, and so far, so good.
I haven't seen a need for Label Operands yet, and they're
only useful as (GP64, Operand) style arguments (they can't
really be destinations, living in read-only memory), but we
could add support pretty easily if we find the need.
Tweak one test to avoid int/pointer ambiguity about 0.
Changed some of the instructions to always use a REX
prefix just to make it easier to funnel everything
through one place. movzbl -> movzbq, etc.
Change-Id: I606f94e76e0ef8f491409f23748f5c8dcb607491
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284023
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Rename YmmOperand to Operand, focusing on that side of things for
now. And delete unused GP64Operand... might not need to return.
Big refactor around W and L bits and the helper op() functions.
Lots more is now funneled through a single core op() function.
Support Xmm and GP64 (direct moves) as Operands too.
As a rule of thumb I measured my progress by counting vex() calls.
Ideally we call it only in that centralized op().
I think I got as close as we can get, with only vgatherdps calling vex()
itself. Given its weird encoding, there's no good way to work
vgatherdps into the abstraction. It's close to Mem{base,0,index,scale},
but the index is a Ymm register, and there isn't any corresponding
special case for it like there normally is for rsp in SIB.
Change-Id: I48e4583293e1df386a18d37ad54197016ce13251
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283806
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This replaces most vmovups variants with two: load to register from
flexible operand, or store from register to flexible operand.
It also upgrades the zero-extending loads to finish off load_store().
More to come in small steps.
Change-Id: I80645f264ee91662260046c8e0a45ba6d1bf98c6
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283753
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This introduces Mem, a way of expressing x86 addressing:
addr = base reg + offset imm + (scale imm * index reg)
using the usual x86 convention of index = rsp to indicate no index.
It then introduces GP64Operand and YmmOperand, which are
generalizations like YmmOrLabel that fold over all the types of
arguments available at that position. (YmmOperand replaces YmmOrLabel).
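The Mem part is shaped roughly like this (an illustrative sketch; the
real field names and defaults may differ):

    enum GP64 { rax, rcx, rdx, rbx, rsp, rbp, rsi, rdi };   // low 8, for illustration
    struct Mem {
        GP64 base;
        int  disp  = 0;
        GP64 index = rsp;   // rsp = the "no index" convention from above
        int  scale = 1;     // 1, 2, 4, or 8
    };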
There's still much to do, but I've started by generalizing most
of the Ymm instructions to take YmmOperand, and added some new
unit tests for vmovdqa to make sure all the various modes work.
Change-Id: Ie6cc1186310ff39c52a2a061431a91d10816c98a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283344
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
I propose landing this, but then pausing on extending math functions
until we develop a clearer migration story for SkSL.
Change-Id: Id42ec37071da058e6e7809abe1ed0570d48df8e7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283229
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
These are neat but mostly just a distraction for now.
I've left all the assembly in place and unit tested
to make putting these back easy when we want to.
Change-Id: Id2bd05eca363baf9c4e31125ee79e722ded54cb7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283307
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
One new instruction movzwl needed.
Change-Id: Ic70ba34d667eb6d570aeca88c4243e0c3309525f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283305
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
- let skvm tell us if FMAs are supported
- unguard previously LLVM-only tests
- simplify testing JIT and interpreter
We're getting close enough to always being able to JIT that carefully
marking what JITs and what doesn't is more annoying than helpful.
Now just test the JIT if present, and always test the interpreter.
Change-Id: I83762b38e0773ccaee795ae0fc9907e86628d73e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283275
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
Starting on atan2, but there is a lot of quadrant clean-up, so will
do that in a separate CL.
Change-Id: Ie1e70051a6ecb19a2e521b56ed09796e8e745276
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283016
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Change-Id: I22c120db2535929bd20df3068cca1aecc57ae746
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282744
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Tests min() / max() float behavior fairly exhaustively.
We sometimes specialize into min_f32_imm and max_f32_imm, so it's
important to test with constant values as each argument to cover that
specialization, and to test with both as non-constant values to cover
when that specialization does not apply.
Change-Id: Ib021fd5a6d322058af2f504048b9ed02d0510732
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282315
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
Change-Id: I90f12cb305ff8daf64b07e5f47bb3a158df95bee
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282120
Commit-Queue: Mike Reed <reed@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
bit_clear is at least useful as a special case for select(),
which helps with code readability.
Add is_NaN() and use these all together in sweep gradient.
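The way I read the select() connection, as scalar math (the fold itself
is my guess at the special case):

    #include <cstdint>
    uint32_t bit_clear(uint32_t x, uint32_t y)             { return x & ~y; }
    uint32_t select   (uint32_t c, uint32_t t, uint32_t f) { return (c & t) | bit_clear(f, c); }
    // in particular, select(c, 0, f) is exactly bit_clear(f, c)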
Change-Id: I57a54f8956f85e0db0662b33f8446b8dc7342d8d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281685
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
- new this-> convention: never use it when calling common public
Builder methods like splat(), bit_and(), etc., as you'd see in
normal user code, but always use it when calling private methods
like this->push(), this->isImm(), this->allImm().
- use C++17 if-statements to scope this->allImm() variables tighter.
- check for x.id == y.id cases where applicable, including a tweak
to min() and max() to make them able to hit the special case.
- add special cases for I32 +,-,*, and remove an old unimportant
unit test that assumed we didn't fold these.
- add special cases for select(), and use select() in a few more
places where it's clearer and now just as efficient.
Change-Id: Idaac9250ac5a95a48d33eeba1cc4380c8c91629d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281678
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
bit_clear() is just another bit_and(),
and bytes() is a way of expressing pshufb
that we never really use (yet).
Can always add them back later, but there's
some extra complexity to think about for each
that I'd like to not think about now:
- common sub-expression elimination between bit_and and bit_clear
- large constant management JIT'ing bytes
Change-Id: I3a54afa963231fec1d5de949acc647e3430ed0d8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281557
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
It's clearer to see the flow of data this way and to read each pass'
implementation without all the pointer indirection, and move semantics
should let this be just as efficient.
Change-Id: I1ac211fbe54bec37de6d126eec0c211573c2a568
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281218
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This finishes up the main refactoring.
Still some follow-ups I want to try.
I got tired of typing usage.users(id) so I converted that to operator[],
which I think is clear for now. If we add more methods that don't refer
to the users, we can undo?
Change-Id: I0ac563cfb1899f7a3f8b2cb6d50ca1646dd05071
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281216
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Seems nicer to keep this encapsulated in a program->program pass
so nothing upstream of it has to think about liveness.
I will be circling back around to profiling the cost of these
temporaries, copies, etc. I just want to start writing them as
if cost were no object first.
Change-Id: I1d1187b521fbe963e720e0d8de90316a549f7797
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281182
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Instead of copying fIndex and marching it forward,
we can tick down our existing uses counts backward,
saving one temporary std::vector.
Our implementation does guarantee the Instructions
returned by users() are sorted, so let's lean into
that... that means we can find the death time of any
instruction simply by looking at users().back()
(if there are any, of course).
Everything else is names and formatting, the biggest
being renaming Uses -> Usage. There's enough mention
of "users" and "uses" contrasting with each other that
I think it makes sense for the type to have the nice
middle-ground neutral name Usage, reflecting the arrow
and not which way we're thinking about it pointing.
Change-Id: I32ea9af6eb6430a162bee6da4810a599e8ed0dfd
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281003
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>