Commit Graph

222 Commits

Mike Klein
42d67a67f5 remove "half" term from SkVM
fp16 is a more precise name, given that there are things like bfloat16,
and this may free up the word "half" for the same sort of more nebulous
format as we use it in SkSL.

Change-Id: I55c39f3670f2c300b9306c92a86c4ec7a2e7b5d7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/339577
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-12-01 17:08:32 +00:00
Mike Klein
8d78da910e more arm64 ops
- easy: ceil, floor, sqrt
  - index is our first arm64 instruction to need a temporary,
    but other than that is pretty simple, just N - iota as usual.

With Op::index now supported, `viewer  --slide GM_runtime_shader`
frame time drops from ~1ms to ~0.24ms.

I accidentally swapped in a float-subtract for an int-subtract and
everything worked fine.  o_O

Change-Id: I44c51506a6a9014b398d6943bb0e3712e4e52445
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/338661
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-11-25 22:49:07 +00:00
Mike Klein
8b16beeaf4 remove SkVM uniform8/16
Uniforms in practice are always pointers or 32-bit ints or floats, so
these are essentially dead code. The change to SkVMBlitter.cpp is the
only interesting change, and I think it makes more sense now than
before.  The program will need float coverage in the end, so might as
well feed it one directly.

Change-Id: I7f1e77731cf10ccc35595012a6df4f9e54a0dad8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/338631
Commit-Queue: Mike Reed <reed@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-11-25 21:44:47 +00:00
Mike Klein
8212fa5f84 remove SkVM Q14 support
Now that I've been reminded that half-float compute is real and no
longer just a dream, Q14 kind of pales in comparison, and just gets in
my way when working on SkVM.

As usual I've left in assembler support and unit tests for those
instructions.  The instructions are all pretty easy to keep working and
tested and don't get in the way, unlike the real "let's do Q14" stuff.

None of this Q14 code was hooked up to anything but unit tests, so no
capability lost here, and no diffs.  As always, it'll be easy to restore
should we ever want to by looking at this CL.

Change-Id: Ia42a96652b381267a7c3ec563b5978efcfc717a7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/338630
Auto-Submit: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-11-25 21:08:17 +00:00
Mike Klein
b76f884220 clean up unused SkVM debug features
I'm not using any of these, so nice to move them aside.

Change-Id: Id43c1606c2f9e6bba0d8f6bd7d2f8f5e02d5b762
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/338629
Auto-Submit: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-11-25 21:06:47 +00:00
Mike Klein
dbc19ea84e implement a whole slew of arm64 ops
This adds assembler support for a bunch of ARM instructions and uses
them to implement a bunch of SkVM ops.  No diffs.

movs() seems strictly more useful than fmovs(), so I've replaced it.

Change-Id: Ied38a44461653598269421b0b56bef4eb19bb1e9
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/335918
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-11-24 16:02:56 +00:00
Mike Klein
ee40ec6dd6 remove Op::pack
pack(x,y,bits) as an alias for x|(y<<bits) only existed originally to
implement it with the SLI arm64 instruction, but I've since realized
that was misguided.

I had thought the assumption on pack ("(x & (y << bits)) == 0"), i.e.
"no overlap between x and the shifted y", was enough to make using SLI
legal, but it's actually not strong enough a requirement.

The SLI docs say "...inserts the result into the corresponding vector
element in the destination SIMD&FP register such that the new zero bits
created by the shift are not inserted but retain their existing value."
The key thing not mentioned there is what happens with zero bits _not_
created by the shift, the ones already present at the top of y.  They
are of course inserted, overwriting any previous values.

This means SLI (and so pack()) become strictly order dependent in a way
I had never intended.  This will work as you'd think,

    skvm::I32 px = splat(0);
    px = pack(px, r,  0);
    px = pack(px, a, 24);

but this version swapping the two calls to pack() will overwrite alpha,

    skvm::I32 px = splat(0);
    px = pack(px, a, 24);
    px = pack(px, r,  0);

I find that error-prone, so I've removed Op::pack and replaced it
with a simple expansion to x|(y<<bits).  That of course works in either
order.

This new test can't JIT at head, but if we implement the other missing
instructions (soon, dependent CL) it would start failing when JIT'd.
The interpreter and x86 were both fine, since they're both doing what's
now the only approach to pack(), the simple x|(y<<bits).

I've left assembler support for SLI in case we want to try it again.
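
For reference, a scalar model of the difference (just a sketch, not
real Skia code):

    #include <cstdint>

    // Plain pack(): order independent.
    uint32_t pack_or (uint32_t x, uint32_t y, int bits) { return x | (y << bits); }

    // SLI-style pack(): only the low `bits` bits of dst survive; everything
    // above them is overwritten by the shifted y.  (bits < 32 here.)
    uint32_t pack_sli(uint32_t dst, uint32_t y, int bits) {
        uint32_t keep = (1u << bits) - 1;   // only the bits zeroed by the shift survive
        return (y << bits) | (dst & keep);
    }

    // pack_sli(pack_sli(0, a,24), r,0) == r ... alpha is gone, as described above.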

Change-Id: Iaf879309d3e1d0a458a688f3a62556e55ab05e23
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/337197
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-11-21 12:38:19 +00:00
Mike Klein
cac130ffc1 to_foo() -> to_Foo()
This tweak makes these conversion functions' names
match the names of the types, e.g. to_F32() makes F32.

Change-Id: I4d71c9bd17d835d09375e3343ee4316082b02889
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/319763
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-09-25 21:31:45 +00:00
Mike Klein
6b72d3a03b q14 rethink
I've been thinking and rethinking and rethinking how best to use 16-bit
values like Q14 fixed-point in SkVM.  Here are some ways:

   A) don't... just use 32-bit values instead
   B) use 16x2-bit pairs to match the narrower 32-bit lane count
   C) double-pump 32-bit values to match the wider 16-bit lane count
   D) use native 16- and 32-bit values and let the backends sort it out

A) is how things work today, and C) is how SkRasterPipeline's lowp mode
works.  I gave both B) and C) a fair shake, and each was already awkward
to work with after writing just a few functions.  I would not give up on
them entirely, but they're no longer my favorites.

D) is subtle and my new favorite.  It's easiest to program with SkVM
when the values we're holding represent single values and the backend
handles any parallelism for us.  That suggests we add a simple 16-bit
Q14 to the existing 32-bit I32 and F32 types, where they can be actively
converted between as normal, but not freely no-op bit punned.  D) says
people shouldn't have to choose between A-C) up front... each backend
can handle it itself.
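
A scalar sketch of the Q14 <-> F32 conversion (with ±0x4000 standing
for ±1.0; these names are just illustrative, not the final API):

    #include <cmath>
    #include <cstdint>

    int16_t to_Q14(float f)   { return (int16_t)std::lrintf(f * 16384.0f); }  // ±1.0 -> ±0x4000
    float   to_F32(int16_t q) { return q * (1.0f / 16384.0f); }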

Under strategy D), it's entirely the backend's job to decide how to
represent each value, and how to vectorize them.  We don't need to
know as a user, and the backends can use the program itself to inform
how they vectorize.  16-bit values could live in xmm registers and
32-bit values in ymm, or the 16-bit values could go in the low half of a
ymm, or the even lanes of a ymm, or a full ymm and use two for 32-bit
values, etc. etc.  This all is a backend choice, not something we should
have to know about when writing a program using Q14/I32/F32.

My next steps are to get Q14 operations tested and plumbed through the
JIT again, and to build out a blitter and a few effects using Q14 color
channels.  Then, independently, we can look at each backend and how to
vectorize them.  Some ideas:

    1) keep running at current vectorization, with half rate 16-bit ops
    2) pump up to 2x wider vectorization unconditionally to favor 16-bit
    3) pump up to 2x wider vectorization only when any 16-bit op is used

These choices can be made independently for each backend (JIT, LLVM,
interp), and I wouldn't be surprised to find that we'll want to do them
differently.  For instance, the interpreter is already running at 32x
vectorization... it might be that pumping it higher won't help anything.

Change-Id: Ib8ad2b1bf790e8c4e3acfb4818d4032f7628e8f8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/319321
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-09-24 20:02:58 +00:00
Mike Klein
63620fdf63 add another q14x2 test
I suspected this wouldn't work, but turns out it's fine.

Change-Id: I91042d458804f3a6af29389de5e6622a39cf23e5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317309
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-09-16 16:17:47 +00:00
Mike Klein
d9c55ce8ed disable show-your-work bit of test
It shifts a negative number left,
which UBSAN reminds us is not allowed in C++.

Cq-Include-Trybots: luci.skia.skia.primary:Test-Debian10-Clang-GCE-CPU-AVX2-x86_64-Debug-All-SkVM_ASAN
Change-Id: I955d53b677f605b3a0f6e2bd31b16e2cc2f64ac5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317260
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-09-16 01:14:51 +00:00
Mike Klein
9791e50e3d impl q14x2 ops in interpreter
These tests cover all the new Q14x2 ops.
As usual, left myself a few TODOs...

While working on this, I uncovered a little subtlety about the blend
instructions we use to implement Op::select... unlike the platonic

    (cond & true_val) | (~cond & false_val)

the hardware instructions are not bitwise!  Each of the fancy blend
instructions like vblendvps or vpblendvb has a fundamental granularity
larger than a bit (4 bytes for vblendvps, 1 byte for vpblendvb).  If
you're using a mask with a granularity of, say, 2 bytes, you need to be
using something with equally fine granularity --- bitwise is ok,
bytewise is ok, 2-byte-wise is ok, but 4-byte-wise isn't.
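
Here's a scalar model of why that granularity matters (illustrative
only, not Skia code):

    #include <cstdint>

    // Bitwise select: correct for a mask of any granularity.
    uint32_t select_bitwise(uint32_t c, uint32_t t, uint32_t f) {
        return (c & t) | (~c & f);
    }

    // vblendvps-style select: only the top bit of each 32-bit lane is consulted.
    uint32_t select_4bytewise(uint32_t c, uint32_t t, uint32_t f) {
        return (c >> 31) ? t : f;
    }

    // With a Q14x2-style mask of 2-byte granularity, say c = 0x0000ffff
    // ("low half from t, high half from f"), select_bitwise is right, but
    // select_4bytewise sees the clear top bit and returns all of f,
    // clobbering the low 16 bits it should have taken from t.  A bytewise
    // blend like vpblendvb consults the top bit of each byte, so it is fine
    // for any mask whose granularity is a byte or coarser.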

Took a quick survey, and the Op::select implementations we're using for
the x86 and ARM JITs are both bytewise, so I think they're fine.  Would
have to think
a bit about LLVM, but these unit tests should at least fire if it were
wrong.  The skvx if_then_else() I've been using in the interpreter has
been 4-byte-wise, but I'm refining that down to 1-byte-wise with
https://skia-review.googlesource.com/c/skia/+/317170.

Change-Id: I09cbc8b91cdb9e50088ee4f6ddf202faa1bf2cb1
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317159
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-09-15 21:55:31 +00:00
Mike Klein
84dd8f9912 implement some useful 16-bit instructions
Add a slew of 16-bit instructions for experiments.

I want to try a fixed-point path through SkVMBlitter, continuing to
represent geometry with F32, but color channels in 16 bits, with several
possible representations:

    - unorm8 lowp like SkRasterPipeline (0 -> 0.0,  0x00ff ->  1.0)
    - 15-bit SkFixed15 fixed-point      (0 -> 0.0,  0x8000 ->  1.0)
    - 14-bit signed fixed-point         (0 -> 0.0, ±0x4000 -> ±1.0)

I'm leaning towards the 14-bit version for being able to hold a good
range of temporary values in [-2,2), or perhaps even a 13-bit analog for
even a little more safety range.  Mostly something new to try.

Most of these instructions are pretty obvious, with notes on a few:

    vpavgw is an unsigned (x+y+1)>>1, and is useful for converting
    unorm8 up to Q14.  There are a couple of ways to do this pretty well;
    using vpavgw is the best and uses the fewest instructions:

         A) (x << 6) + ( x    >> 2) + (x == 255)   // Ok approx.
         B) (x << 6) + ((x+1) >> 2)                // Better approx.
         C) vpavgw(x << 7, x >> 1)                 // Perfect math!

    The best reverse math I've found is (x >> 6) - (x > 16319); there's
    a quick scalar check of both directions after these notes.

    vpmulhrsw is the key to the whole thing as usual, letting us do
    16x16->16-bit multiplies.  An SkFixed15 multiply is vpmulhrsw
    followed by vpabsw (also added here), and a Q14 multiply is
    vpmulhrsw followed by a simple <<1.

    I've added both signed and unsigned min and max.  Not entirely
    sure they'll all be used, but I do have my eye on vpminuw as a
    single-instruction clamp to [0,0x4000] ~~> [0.0,1.0], treating
    any negative Q14 as very large unsigned.
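
A quick scalar sanity check of the unorm8 <-> Q14 math in the notes
above (just a sketch, not shipping code):

    #include <cassert>
    #include <cstdint>

    int main() {
        for (int x = 0; x <= 255; x++) {
            // C): vpavgw(x<<7, x>>1), with vpavgw(a,b) == (a+b+1)>>1.
            uint16_t q = (uint16_t)(((x << 7) + (x >> 1) + 1) >> 1);
            assert(x != 0   || q == 0);        // 0x00 maps to 0.0
            assert(x != 255 || q == 0x4000);   // 0xff maps to 1.0
            int back = (q >> 6) - (q > 16319); // the reverse mapping
            assert(back == x);                 // round-trips exactly
        }
        return 0;
    }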

Change-Id: I0db7f3f943ef6c9a600821444cc5b003fe5f675d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317119
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-09-15 15:09:52 +00:00
Mike Klein
4ecc970162 jit load128/store128
Added vpinsrd to make it easy, and fixed a comment on vpinsrb's tests.

Nothing too tricky, just the naive implementations.
The hardest part was getting all the data to the right places.

No diffs!

Change-Id: Ie4c1f1e429abfa75ca80a93d108061287d5ace80
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306872
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-07-30 18:16:10 +00:00
Mike Klein
3136789972 refactor large load/stores
This noop refactor ports the lane-immediate idea to the 64-bit loads and
to 128-bit stores, and renames to simple load64, load128, store128.

The store128 part of this change is _slightly_ sneaky in that we pack
the argument pointer index and the lane both into int immz, but there's
plenty of space there: lane needs 1 bit, and the argument pointer index
needs maybe 3 or 4 max.
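
Roughly like this (the exact bit layout is an implementation detail,
shown here only for illustration):

    int pack_immz(int ptr_index, int lane) { return (ptr_index << 1) | lane; }  // lane fits in 1 bit
    int immz_ptr (int immz)                { return immz >> 1; }
    int immz_lane(int immz)                { return immz & 1; }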

Change-Id: I3fa01bf31312b8a69c7e287d649470ba15a8ea40
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306810
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-07-30 14:40:53 +00:00
Mike Klein
0cfd503996 JIT functions taking six pointers
This allows us to JIT functions taking one more argument.
Our x86 JIT was the weak link: LLVM handles arbitrary numbers
of arguments, and our aarch64 JIT handles up to seven already.

I plan to pass one more source varying sometimes, and in extremely
unusual cases we could need six pointers:

    1) uniforms
    2) destination
    3) 8-bit coverage plane
    4) 8-bit multiply plane
    5) 8-bit add plane
    6) source varying

Those varyings 3-5 are all indexable off the coverage pointer 3) if we
know the dimensions, so I could be convinced that we should only pass
one there, making our maximum number of arguments four.  I'll be looking
at that independently, but it doesn't hurt to have our capability to go
up to six either way.

Change-Id: Id7a5b88e382a95bb560633e95c5be273b7ea67d1
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306241
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-07-28 19:49:12 +00:00
Mike Klein
68d075e18b fix r12
The 3 bits that represent rsp are used as a signaling mechanism to
choose between compact base + disp encoding and full base + disp +
scale*index encoding, one byte longer.  If a base is encoded as rsp in
the MOD R/M byte, an SIB byte follows.

r12 shares its 3 bottom bits with rsp, so we need to treat it like rsp
when deciding whether or not we need an SIB byte.  (As usual registers
r8-15's distinguishing upper bit is carried by a REX/VEX prefix.)
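
In other words, the rule is roughly (a sketch, not the actual
Assembler code):

    // Encoding the 3-bit base field as 0b100 means "an SIB byte follows".
    bool base_needs_sib(int reg /*0..15*/) {
        return (reg & 7) == 4;   // rsp is 4, r12 is 12; both have low bits 0b100
    }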

rsp is also treated as a special signal in the index field of the SIB
byte, meaning essentially scale=0, and as a result it's not possible to
use rsp as an index.  It _is_ possible to use r12 as an index, with a
test added here.  This worked without any code change.  It seems the
index=rsp -> scale=0 signal is triggered by all four bits of the index
register, including the X upper bit from REX/VEX.

I have found these charts useful:
https://wiki.osdev.org/X86-64_Instruction_Encoding#32.2F64-bit_addressing_2

Looking at those charts I noticed that rbp/r13 are also special cases,
so I'm eyeing them warily and will avoid using them for now.

Change-Id: Id78f826a39c060b03000eae7c50c642ef44d57db
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306237
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-07-28 19:46:17 +00:00
Mike Klein
184f601346 JIT load64_{lo,hi}
vpermps (added here) makes this very easy,
with an index controlling what 32-bit values go where.

An index of the form {0,2,4,6|?,?,?,?} will put the 4 low 32-bit halves
of 4 64-bit values in lanes 0,1,2,3.  We can use that twice to get all 8
low halves, then our new vperm2f128 to put them together.  Conveniently
vpermps can also load directly from memory:

    vpermps   (%rdi), {0,2,4,6|?,?,?,?}, lo
    vpermps 32(%rdi), {0,2,4,6|?,?,?,?}, hi
    vperm2f128 0x20, lo,hi, dst

We don't care what those top four indices are for load64_lo, so we'll
use them as the indices for load64_hi. That makes the full index
{0,2,4,6|1,3,5,7}, and load64_hi will just vperm2f128 the other 128 bits
of lo/hi:

    vpermps   (%rdi), {?,?,?,?|1,3,5,7}, lo
    vpermps 32(%rdi), {?,?,?,?|1,3,5,7}, hi
    vperm2f128 0x31, lo,hi, dst

vpermps needs its index in a register, so we use a temporary for that.
Our logical lo can alias dst, and hi can alias that index, so it's just
one extra temporary register in the end.

Change-Id: Ie6a4efbf12ddada45dd09c0f580fa7350cf3019e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/305171
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-07-22 19:33:00 +00:00
Mike Klein
d8194dcc3e JIT store64
Add some packing instructions to make it possible.

The gist is that we've got

    r(x) = {a,b,c,d|e,f,g,h}
    r(y) = {i,j,k,l|m,n,o,p}

where r(x) holds each low 32-bit half of a 64-bit value,
and r(y) holds the high halves.  We want to write

    a,i,b,j,c,k,d,l,e,m,...

So first the vpunpck[lh]dq instructions produce

    L = {a,i,b,j|e,m,f,n}
    H = {c,k,d,l|g,o,h,p}

which gets us halfway there.  The vperm2f128s select the low (0x20) or
high (0x31) 128-bit halves of L/H, so we end up writing to memory

    dst+0:  a,i,b,j,c,k,d,l
    dst+32: e,m,f,n,g,o,h,p

Existing tests cover that store64 works.

Change-Id: Ic00ad9bdb448b79867584c27cf0114a42ed32379
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/305156
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-07-22 16:54:55 +00:00
Mike Klein
e942b8cac6 some small SkVM TODOs
Three TODOs, all basically the same idea: divide-by-zero is not the only
way to produce non-finite results from a division.  You can also divide
by very-near-zero, and maybe some other ways.

Added is_finite() to make this clear.  is_finite() is almost as cheap as
the comparisons it replaces, so performance shouldn't be affected.
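
Conceptually it's just the scalar predicate below; the SkVM version
works lane-wise and may be phrased differently:

    #include <cmath>

    bool is_finite(float x) {
        return std::isfinite(x);   // true for zero/denormal/normal, false for ±inf and NaN
    }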

Change-Id: I0a803e9ab4e3286f4e10a13d3aacee370eaaa803
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/304669
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-07-21 16:30:37 +00:00
Mike Klein
6732da0b21 add 64-bit load/store ops
This adds load/store ops for 64-bit values, with two load64 instructions
returning the low and high 32-bits each, and store64 taking both.

These are implemented in the interpreter and tested but not yet JIT'd
or hooked up for loading and storing 64-bit PixelFormats.  Hopefully
those two CLs to follow shortly.
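
For reference, the per-element semantics in scalar form (signatures
here are illustrative, not the real SkVM API):

    #include <cstdint>

    void load64(const uint64_t* src, int i, uint32_t* lo, uint32_t* hi) {
        *lo = (uint32_t)(src[i]      );    // load64_lo
        *hi = (uint32_t)(src[i] >> 32);    // load64_hi
    }
    void store64(uint64_t* dst, int i, uint32_t lo, uint32_t hi) {
        dst[i] = (uint64_t)lo | ((uint64_t)hi << 32);
    }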

Change-Id: I7e5fc3f0ee5a421adc9fb355d0b6b661f424b505
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/303380
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-07-17 21:26:53 +00:00
Mike Klein
4d680cdf07 a bunch of half-related stuff
- add f32<->f16 functions to skvx
  - add f32<->f16 x86 instructions to skvm::Assembler
  - add f32<->f16 ops to skvm,
    using the skvx functions in the interpreter

Still TODO:
    use the new x86 instructions in the JIT

(For now like in many other ways, the aarch64 JIT
continues to languish.  Will pick that back up one day.)

Change-Id: Ib8dc1ccdc75ecb23769ea4947d66d3ab22520f23
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/302942
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-07-15 20:47:31 +00:00
Mike Klein
b2290d7a8b minor skvm cleanup
This is just minor little stuff I've been meaning to do,
with essentially no impact anywhere.

   - Add an easy-to-flip switch to disable the JIT.

   - Stop checking so carefully whether we hasJIT()
     in test_jit_and_interpreter().  This was helpful
     for making progress but now just gets in the way.

Change-Id: I08065ba1f42700f9d7d63f8303af357ec5fe11ae
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/302944
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-07-15 17:10:03 +00:00
Mike Klein
400ba22f45 save only xmm6-15 on windows
Just a little follow up, adding a mem->xmm vmovups
instruction to make it possible.  Nothing tricky.

Change-Id: I319e11839e44ccda46e664c82fb858a18499f9be
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/299883
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-07-01 02:58:22 +00:00
Adlai Holler
684838f1f5 Mark SkStringPrintf as SK_PRINTF_LIKE
Change-Id: I3d2ee8dca1d2e962794ce8c3c391779bff357f0c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/288762
Commit-Queue: Adlai Holler <adlai@google.com>
Reviewed-by: Brian Salomon <bsalomon@google.com>
Auto-Submit: Adlai Holler <adlai@google.com>
2020-05-12 15:22:14 +00:00
Mike Klein
c74db7998b remove Assembler::here()
It's just a shortcut for

   Assembler::Label l;
   a->label(&l);

and it never really took off.
It's easier to work on Label without it.

Change-Id: I4a060f78f235ac3fcc87b996f5d9404ffba43c53
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/288997
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-05-11 17:42:40 +00:00
Ben Wagner
e8ffb088c1 Close file before writing in SkVMTest.
On Windows each time the test is run one gets

sk_fopen: fopen("resources\SkVMTest.expected", "wb") returned nullptr (errno:22): Invalid argument

due to 'expected' holding a read lock on the file when trying to open
the same file for writing. Windows locks files aggressively and in this
case there is no good reason to keep the read access when trying to
truncate and write to this file path.

This also changes the logic to only update the file in the error case
when the content would actually change. Previously it seems this file
would be re-written (usually with the same content) every time this test
ran.

Change-Id: I9c96f1e7e0692e57326fec351c7353c423014c9a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/287381
Commit-Queue: Ben Wagner <bungeman@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-05-05 15:10:09 +00:00
Mike Klein
962020fe39 simplify binary ops too
Rename apply() to unary(), then add binary().

Fix unary to calculate N=base-inst+1.

Convert to simpler `auto&& fn` mode by renaming
approx_atan(y,x) to approx_atan2.  Now we can pass
functions, lambdas, non-lambda functors, whatever.

Change-Id: I17a6aa137f224edc0accd0509c5023a30980fe39
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/286900
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-05-01 20:43:09 +00:00
Mike Klein
48c51bb51e test vcmpeqps with label offset
We're getting this wrong today, and likely for several other
instructions too.  We need to account for the immediate byte
that follows the ip-relative offset!

Add imm_byte_after_operand() to take care of this.
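
The accounting is roughly this (a sketch with made-up names, not the
actual helper):

    // The 4-byte offset is relative to the end of the whole instruction, so any
    // immediate byte trailing the offset field has to be counted too.
    int rip_relative_disp(int label_pos, int end_of_disp32, int imm_bytes_after) {
        return label_pos - (end_of_disp32 + imm_bytes_after);
    }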

Change-Id: If0f4359b0a8e9d769bfde0d8456726e82f798123
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/285237
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-24 13:22:45 +00:00
Mike Klein
21e85eb549 Reland "Reland "gather8/16 JIT support""
This is a reland of 1283d55f35

... this time, also checking for HSW feature set.

Original change's description:
> Reland "gather8/16 JIT support"
>
> This is a reland of 54659e51bc
>
> ... now expecting not to JIT when under ASAN/MSAN.
>
> Original change's description:
> > gather8/16 JIT support
> >
> > The basic strategy is one at a time, inserting 8- or 16-bit values
> > into an Xmm register, then expanding to 32-bit in a Ymm at the end
> > using vpmovzx{b,w}d instructions.
> >
> > Somewhat annoyingly we can only pull indices from an Xmm register,
> > so we grab the first four then shift down the top before the rest.
> >
> > Added a unit test to get coverage where the indices are reused and
> > not consumed directly by the gather instruction.  It's an important
> > case, needing to find another register for accum that can't just be
> > dst(), but there's no natural coverage of that anywhere.
> >
> > Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> > Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> > Reviewed-by: Herb Derby <herb@google.com>
> > Commit-Queue: Mike Klein <mtklein@google.com>
>
> Change-Id: I67f441615b312b47e7a3182e85e0f787286d7717
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284472
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

Change-Id: Id0e53ab67f7a70fe42dccca1d9912b07ec11b54d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284504
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-20 20:25:06 +00:00
Mike Klein
7d5342d9ac Revert "Reland "gather8/16 JIT support""
This reverts commit 1283d55f35.

Reason for revert: one more try...

Original change's description:
> Reland "gather8/16 JIT support"
> 
> This is a reland of 54659e51bc
> 
> ... now expecting not to JIT when under ASAN/MSAN.
> 
> Original change's description:
> > gather8/16 JIT support
> >
> > The basic strategy is one at a time, inserting 8- or 16-bit values
> > into an Xmm register, then expanding to 32-bit in a Ymm at the end
> > using vpmovzx{b,w}d instructions.
> >
> > Somewhat annoyingly we can only pull indices from an Xmm register,
> > so we grab the first four then shift down the top before the rest.
> >
> > Added a unit test to get coverage where the indices are reused and
> > not consumed directly by the gather instruction.  It's an important
> > case, needing to find another register for accum that can't just be
> > dst(), but there's no natural coverage of that anywhere.
> >
> > Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> > Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> > Reviewed-by: Herb Derby <herb@google.com>
> > Commit-Queue: Mike Klein <mtklein@google.com>
> 
> Change-Id: I67f441615b312b47e7a3182e85e0f787286d7717
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284472
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

TBR=mtklein@google.com,herb@google.com

Change-Id: I953fcd2aef308fd901880618fa540ac9f6d88e84
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284503
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-20 17:05:52 +00:00
Mike Klein
1283d55f35 Reland "gather8/16 JIT support"
This is a reland of 54659e51bc

... now expecting not to JIT when under ASAN/MSAN.

Original change's description:
> gather8/16 JIT support
>
> The basic strategy is one at a time, inserting 8- or 16-bit values
> into an Xmm register, then expanding to 32-bit in a Ymm at the end
> using vpmovzx{b,w}d instructions.
>
> Somewhat annoyingly we can only pull indices from an Xmm register,
> so we grab the first four then shift down the top before the rest.
>
> Added a unit test to get coverage where the indices are reused and
> not consumed directly by the gather instruction.  It's an important
> case, needing to find another register for accum that can't just be
> dst(), but there's no natural coverage of that anywhere.
>
> Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

Change-Id: I67f441615b312b47e7a3182e85e0f787286d7717
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284472
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-20 16:47:04 +00:00
Mike Klein
86975cd168 Revert "gather8/16 JIT support"
This reverts commit 54659e51bc.

Reason for revert: ASAN

Original change's description:
> gather8/16 JIT support
> 
> The basic strategy is one at a time, inserting 8- or 16-bit values
> into an Xmm register, then expanding to 32-bit in a Ymm at the end
> using vpmovzx{b,w}d instructions.
> 
> Somewhat annoyingly we can only pull indices from an Xmm register,
> so we grab the first four then shift down the top before the rest.
> 
> Added a unit test to get coverage where the indices are reused and
> not consumed directly by the gather instruction.  It's an important
> case, needing to find another register for accum that can't just be
> dst(), but there's no natural coverage of that anywhere.
> 
> Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

TBR=mtklein@google.com,herb@google.com

Change-Id: I912273e6ffc9258537ba806951a5964be0218d58
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284471
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-20 15:27:49 +00:00
Mike Klein
54659e51bc gather8/16 JIT support
The basic strategy is one at a time, inserting 8- or 16-bit values
into an Xmm register, then expanding to 32-bit in a Ymm at the end
using vpmovzx{b,w}d instructions.

Somewhat annoyingly we can only pull indices from an Xmm register,
so we grab the first four then shift down the top before the rest.
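
For reference, what the op computes, scalar-wise (signature here is
just illustrative):

    #include <cstdint>

    void gather8(const uint8_t* base, const int* ix, int n, uint32_t* dst) {
        for (int i = 0; i < n; i++) {
            dst[i] = base[ix[i]];   // each 8-bit load is zero-extended to 32 bits
        }
    }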

Added a unit test to get coverage where the indices are reused and
not consumed directly by the gather instruction.  It's an important
case, needing to find another register for accum that can't just be
dst(), but there's no natural coverage of that anywhere.

Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-20 15:19:37 +00:00
Mike Klein
b8e041e5f5 refactor arm instructions
Funnel the ARM instructions through two op() helpers, one for 3-arg
vector instructions, another for all others (0,1,2 arg, optional imm).

More consistent use of (immN & N_mask) to make things clearer.

Add missing imm12 offset to load and store instructions, with tests.
Notice they're in element counts, so we can go up to 4096 16-byte stack
entries, not 256 entries like you might think.

Change-Id: I99a3ad30b7b0926f93da671f00d89759934e65b4
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284255
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-17 17:12:13 +00:00
Mike Klein
48e782486f set up stack frames on aarch64
Going to be easier to work on stack/register things if we
don't have to keep thinking of aarch64 as a special case.

This just sets up the frames, will follow up with JITMode::Stack.

Change-Id: Ic0df4c5deb9c7d55eb73a62e4b6b1c9919996974
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284243
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-17 16:44:53 +00:00
Mike Klein
c15c936c3e GP64 Operand conversion
Move all the non-vector instructions together,
and convert them to use Operand where possible.

In general that can be any of
   - (Operand, imm)
   - (Operand, GP64)
   - (GP64, Operand)
and that means there are two ways to encode (GP64,GP64)
instructions, so there's a disambiguator added.

Our measure of success is eliminating calls to rex()
except from our one helper, and so far, so good.

I haven't seen a need for Label Operands yet, and they're
only useful as (GP64, Operand) style arguments (can't
really be destinations in read-only memory) but we could
add support pretty easily if we find the need.

Tweak one test to avoid int/pointer ambiguity about 0.

Changed some of the instructions to always use a REX
prefix just to make it easier to funnel everything
through one place.  movzbl -> movzbq, etc.

Change-Id: I606f94e76e0ef8f491409f23748f5c8dcb607491
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284023
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-16 19:39:11 +00:00
Mike Klein
8390f2ead6 lots more refactoring
Rename YmmOperand to Operand, focusing on that side of things for
now.  And delete unused GP64Operand... might not need to return.

Big refactor around W and L bits and the helper op() functions.
Lots more is now funneled through a single core op() function.

Support Xmm and GP64 (direct moves) as Operands too.

As a rule of thumb I measured my progress by counting vex() calls.
Ideally we call it only in that centralized op().

I think I got as close as we can get, with only vgatherdps calling vex()
itself.  Given its weird encoding, there's no good way to work
vgatherdps into the abstraction.  It's close to Mem{base,0,index,scale},
but the index is a Ymm register, and there isn't any corresponding
special case for it like there is normally for rsp in SIB.

Change-Id: I48e4583293e1df386a18d37ad54197016ce13251
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283806
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-04-16 16:55:32 +00:00
Mike Klein
edc2dacb3a convert load_store / stack_load_store to new style
This replaces most vmovups variants with two: load to register from
flexible operand, or store from register to flexible operand.

And upgrade the zero-extending loads too to finish off load_store().

More to come in small steps.

Change-Id: I80645f264ee91662260046c8e0a45ba6d1bf98c6
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283753
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-15 22:54:46 +00:00
Mike Klein
9bb88673f5 start on operand refactoring
This introduces Mem, a way of expressing x86 addressing:

   addr = base reg + offset imm + (scale imm * index reg)

using the usual x86 convention of index = rsp to indicate no index.
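
Something like this (a sketch; field names and defaults are
illustrative):

    enum GP64 { rax, rcx, rdx, rbx, rsp, rbp, rsi, rdi };   // abbreviated

    struct Mem {
        GP64 base;
        int  disp  = 0;     // byte offset
        GP64 index = rsp;   // rsp here means "no index", per the convention above
        int  scale = 1;     // 1, 2, 4, or 8
    };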

And then, this introduces GP64Operand and YmmOperand, which are
generalizations like YmmOrLabel that fold over all the types of
arguments available at that position.  (YmmOperand replaces YmmOrLabel).

There's still much to do, but I've started by generalizing most
of the Ymm instructions to take YmmOperand, and added some new
unit tests for vmovdqa to make sure all the various modes work.

Change-Id: Ie6cc1186310ff39c52a2a061431a91d10816c98a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283344
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-15 20:41:09 +00:00
Mike Reed
1b84ef2b50 [skvm] approx_atan2
I propose landing this, but then pause on extending math functions until
we develop a clearer migration story for sksl.

Change-Id: Id42ec37071da058e6e7809abe1ed0570d48df8e7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283229
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
2020-04-13 22:31:40 +00:00
Mike Klein
45d9cc86b3 remove i16x2 ops
These are neat but mostly just a distraction for now.
I've left all the assembly in place and unit tested
to make putting these back easy when we want to.

Change-Id: Id2bd05eca363baf9c4e31125ee79e722ded54cb7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283307
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-04-13 19:08:11 +00:00
Mike Klein
cb5110443f impl uniform16
One new instruction movzwl needed.

Change-Id: Ic70ba34d667eb6d570aeca88c4243e0c3309525f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283305
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-13 19:04:11 +00:00
Mike Klein
10fc1e66a9 skvm unit test cleanup
- let skvm tell us if FMAs are supported
  - unguard previously LLVM-only tests
  - simplify testing JIT and interpreter

We're getting close enough to always being able to JIT that carefully
marking what JITs and what doesn't is more annoying than helpful.
Now just test the JIT if present, and always test the interpreter.

Change-Id: I83762b38e0773ccaee795ae0fc9907e86628d73e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283275
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-04-13 18:48:01 +00:00
Mike Reed
d468a1619a [skvm] approx_[asin,acos,atan]
Starting on atan2, but there is a lot of quadrant clean-up, so will
do that in a separate CL.

Change-Id: Ie1e70051a6ecb19a2e521b56ed09796e8e745276
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283016
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Reed <reed@google.com>
2020-04-13 12:17:02 +00:00
Mike Reed
801ba0d606 approx_tan for skvm
Change-Id: I22c120db2535929bd20df3068cca1aecc57ae746
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282744
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Reed <reed@google.com>
2020-04-10 17:09:17 +00:00
Mike Klein
210288fdcd add SkVM_min_max unit test
Tests min() / max() float behavior fairly exhaustively.

We sometimes specialize into min_f32_imm and max_f32_imm, so it's
important to test with constant values as each argument to cover that
specialization, and to test with both as non-constant values to cover
when that specialization does not apply.

Change-Id: Ib021fd5a6d322058af2f504048b9ed02d0510732
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282315
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-04-08 19:05:36 +00:00
Mike Klein
5e9f0ee13f add and test stack load/store
Change-Id: Ie0d29e31bd8c156ecd46cd658b5a4c53d8d2e11d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282115
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-07 21:15:47 +00:00
Mike Reed
82ff25e02c approximate sine for skvm
Change-Id: I90f12cb305ff8daf64b07e5f47bb3a158df95bee
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282120
Commit-Queue: Mike Reed <reed@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-04-07 18:32:17 +00:00
Mike Klein
4067a9429a the return of bit_clear
bit_clear is at least useful as a special case for select(),
which helps with code readability.
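
The identity in play, scalar-wise (just a sketch):

    #include <cstdint>

    uint32_t bit_clear(uint32_t x, uint32_t mask) { return x & ~mask; }

    // select(cond, t, f) == (cond & t) | bit_clear(f, cond), so a select
    // whose true side is zero collapses to a single bit_clear.
    uint32_t select(uint32_t cond, uint32_t t, uint32_t f) {
        return (cond & t) | bit_clear(f, cond);
    }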

Add is_NaN() and use these all together in sweep gradient.

Change-Id: I57a54f8956f85e0db0662b33f8446b8dc7342d8d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281685
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-05 16:44:16 +00:00
Mike Klein
aa68109a59 special-case overhaul
- new this-> convention: never use it when calling common public
    Builder methods like splat(), bit_and(), etc like you'd see in
    normal user code, but always use it when calling private methods
    like this->push(), this->isImm(), this->allImm().

  - use c++17 if-statements to scope this->allImm() variables tighter.

  - check for x.id == y.id cases where applicable, including a tweak
    to min() and max() to make them able to hit the special case.

  - add special cases for I32 +,-,*, and remove an old unimportant
    unit test that assumed we didn't fold these.

  - add special cases for select(), and use select() in a few more
    places where it's clearer and now just as efficient.

Change-Id: Idaac9250ac5a95a48d33eeba1cc4380c8c91629d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281678
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-05 15:41:36 +00:00
Mike Klein
cca2acfb77 remove little-used bit_clear() and bytes()
bit_clear() is just another bit_and(),
and bytes() is a way of expressing pshufb
that we never really use (yet).

Can always add them back later, but there's
some extra complexity to think about for each
that I'd like to not think about now:

  - common sub-expression elimination between bit_and and bit_clear
  - large constant management JIT'ing bytes

Change-Id: I3a54afa963231fec1d5de949acc647e3430ed0d8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281557
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-03 18:16:54 +00:00
Mike Klein
5b701e127c pass around programs by value
It's clearer to see the flow of data this way and to read each pass'
implementation without all the pointer indirection, and move semantics
should let this be just as efficient.

Change-Id: I1ac211fbe54bec37de6d126eec0c211573c2a568
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281218
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-02 16:44:06 +00:00
Mike Klein
b7d87903df pull out schedule, finalize
This finishes up the main refactoring.
Still some follow-ups I want to try.

I got tired of typing usage.users(id) so I converted that to operator[],
which I think is clear for now.  If we add more methods that don't refer
to the users, we can undo?

Change-Id: I0ac563cfb1899f7a3f8b2cb6d50ca1646dd05071
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281216
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-02 16:37:29 +00:00
Mike Klein
7542ab5e5b reframe liveness_analysis as eliminate_dead_code
Seems nicer to keep encapsulated in a program->program pass
so nothing upstream of it has to think about liveness.

I will be circling back around to profiling the cost of these
temporaries, copies, etc.  I just want to start writing them as
if cost were no object first.

Change-Id: I1d1187b521fbe963e720e0d8de90316a549f7797
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281182
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-02 14:44:36 +00:00
Mike Klein
4f90a8e399 Uses refactoring
Instead of copying fIndex and marching it forward,
we can tick down our existing uses counts backward,
saving one temporary std::vector.

Our implementation does guarantee the Instructions
returned by users() are sorted, so let's lean into
that... that means we can find the death time of any
instruction simply by looking at users().back()
(if there are any, of course).

Everything else is names and formatting, the biggest
being renaming Uses -> Usage.  There's enough mention
of "users" and "uses" contrasting with each other that
I think it makes sense for the type to have the nice
middle-ground neutral name Usage, reflecting the arrow
and not which way we're thinking about it pointing.

Change-Id: I32ea9af6eb6430a162bee6da4810a599e8ed0dfd
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281003
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-04-01 18:18:13 +00:00
Herb Derby
f20400e2a1 Introduce Liveness and Uses into existing scheduler
Liveness tracks all the live instructions in the instruction stream.
Uses maps each value to the instructions that use it.

Uses is overkill for the current schedule, but will be needed for
spilling.

Change-Id: Id20b7b7a90901e156d323bb612c5908f91405967
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/277744
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-03-31 17:53:07 +00:00
Mike Reed
bcb46c06c7 Add approx_pow/log2/pow2 to SkVM builder
... in support of programs for colorspacexforms

Change-Id: I72ace09f10511ef8994038a4af3feab8bc1a299e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/278466
Commit-Queue: Mike Reed <reed@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-03-23 22:17:04 +00:00
Mike Reed
f5ff4c25a0 add loadF() and storeF() helpers to Builder
Change-Id: I5fe1ca090868cfb9fa930753232e9cbb42737d9a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/278473
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
2020-03-23 19:25:43 +00:00
Mike Klein
5caf7dee25 restore Op::round
While I think trunc(mad(x, scale, 0.5)) is fine for doing our float
to fixed point conversions, round(mul(x, scale)) was kind of better
all around:

   - better rounding than +0.5 and trunc
   - faster when mad() is not an fma
   - often now no need to use the constant 0.5f or have it in a register
   - allows the mul() in to_unorm to use mul_f32_imm

Those last two points are key... this actually frees up 2 registers in
the x86 JIT when using to_unorm().
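
The two formulations, scalar-wise (illustrative helpers, assuming
round-to-nearest-even is the current mode):

    #include <cmath>

    int to_unorm_trunc(float x, float scale) { return (int)(x*scale + 0.5f); }      // trunc(mad(x, scale, 0.5))
    int to_unorm_round(float x, float scale) { return (int)std::lrintf(x*scale); }  // round(mul(x, scale))

    // These differ on exact .5 cases (nearest-even vs. always rounding up) and
    // on negative values, where adding 0.5 and truncating pulls toward zero.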

So I think maybe we can resurrect round and still guarantee our desired
intra-machine stability by committing to using instructions that follow
the current rounding mode, which is what [v]cvtps2dq inextricably uses.

Left some notes on the ARM impl... we're rounding to nearest even there,
which is probably the current mode anyway, but to be more correct we
need a slightly longer impl that rounds float->float then "truncates".
Unsure whether it matters in practice.  Same deal in the unit test that
I added back, now testing negative and 0.5 cases too. The expectations
assume the current mode is nearest even.

I had the idea to resurrect this when I was looking at adding _imm Ops
for fma_f32.  I noticed that the y and z arguments to an fma_f32 were by
far most likely to be constants, and when they are, they're very likely
to both be constants, e.g. 255.0f & 0.5f from to_unorm(8,...).

llvm disassembly for SkVM_round unit test looks good:

~ $ llc -mcpu=haswell /tmp/skvm-jit-1231521224.bc -o -
	.section	__TEXT,__text,regular,pure_instructions
	.macosx_version_min 10, 15
	.globl	"_skvm-jit-1231521224"  ## -- Begin function skvm-jit-1231521224
	.p2align	4, 0x90
"_skvm-jit-1231521224":                 ## @skvm-jit-1231521224
	.cfi_startproc
	cmpl	$8, %edi
	jl	LBB0_3
	.p2align	4, 0x90
LBB0_2:                                 ## %loopK
                                        ## =>This Inner Loop Header: Depth=1
	vcvtps2dq	(%rsi), %ymm0
	vmovupd	%ymm0, (%rdx)
	addl	$-8, %edi
	addq	$32, %rsi
	addq	$32, %rdx
	cmpl	$8, %edi
	jge	LBB0_2
LBB0_3:                                 ## %hoist1
	xorl	%eax, %eax
	testl	%edi, %edi
	jle	LBB0_6
	.p2align	4, 0x90
LBB0_5:                                 ## %loop1
                                        ## =>This Inner Loop Header: Depth=1
	vcvtss2si	(%rsi,%rax), %ecx
	movl	%ecx, (%rdx,%rax)
	decl	%edi
	addq	$4, %rax
	testl	%edi, %edi
	jg	LBB0_5
LBB0_6:                                 ## %leave
	vzeroupper
	retq
	.cfi_endproc
                                        ## -- End function

Change-Id: Ib59eb3fd8a6805397850d93226c6c6d37cc3ab84
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/276738
Auto-Submit: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-03-12 21:10:34 +00:00
Mike Klein
7c0332cd35 re-enable fnma
- hook up fmls.4s as fnma_f32
 - add fneg.4s
 - use fneg.4s + fmls.4s to impl fms_f32
 - more tests to exercise these

Change-Id: I60173a5e4618ab968a9361e15334a1d63c001372
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/275412
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-03-05 21:58:07 +00:00
Mike Klein
238105b50c skip dump checks on machines w/o FMAs
They'll never see fma_f32 ops.

Change-Id: I39371606c673fb76bdcbbe08c1b25308675f8f2c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/275151
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-03-04 23:21:02 +00:00
Mike Klein
cb50b117e3 get rid of troublesome Op::round
We really only need to_unorm(),
and that's fine with trunc(mad(x, scale, 0.5)).

Change-Id: I1561c678501963a9ae53c22994fc906159fc7199
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/275075
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-03-04 22:26:01 +00:00
Herb Derby
c02a41f0e6 SkVM implement min max
Change-Id: I225e8f7395e58a4ca3c1c151d8711796e6a56939
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274185
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-02-28 21:26:15 +00:00
Mike Klein
5a8404c93e sqrt
- test sqrt
  - impl. sqrt for llvm

Change-Id: I38a06ee57bf4d50e7d068321ab765ede3d1d73bc
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274183
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-02-28 20:55:06 +00:00
Mike Klein
14548b94c3 index
- test index
  - impl. index in llvm
  - convert to loop counter from uint64_t -> int32_t
    to match how we use it in other backends

Change-Id: Iee371d67eddaace068906b861292eb5ed3d74c95
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274135
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-02-28 20:33:57 +00:00
Herb Derby
fb4ff8d6cc SkVM round test
Change-Id: I4226393275a11be3babe21b7f8461767c5b55f23
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274127
Commit-Queue: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-02-28 20:11:06 +00:00
Mike Klein
e96207a9e3 i16x2 ops
SE(val) -> S(dst_type, val) to make this work.

Change-Id: Icf42f706b2e7761db8ce83f1e1ef95c288bfecf4
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274120
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-02-28 20:03:46 +00:00
Mike Klein
22c007da34 select + stores
This is enough for another swath of tests.

Change-Id: Ida43fa2ee2ebd8e6086923fb9fafef8f646d0a93
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274074
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
2020-02-28 19:32:42 +00:00
Herb Derby
5c5bd1a637 Add comparisons (eq|neq|gt|gte)(i32|f32)
Change-Id: Ic53758162507d769548953001bd761e84d717322
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274064
Reviewed-by: Mike Klein <mtklein@google.com>
2020-02-28 17:17:45 +00:00
Mike Klein
11efa18eca impl load32
This means we can write a memset32 (load32 -> store32),
tested explicitly with the new unit test.

Slight changes to the type protocol:
  - load and splat now generate scalars or vectors
    depending on how `scalar` is set
  - store should no longer have to pay attention to `scalar`;
    its input values will already be the right size

Clean up some of the type declarations where we don't
actually need the subclass types, holding llvm::Type* instead.
This makes using ?: easier.

Change-Id: I2f98701ebdeead0513d355b2666b024794b90193
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/273781
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-27 18:50:05 +00:00
Mike Klein
7b3999edcb convert to phi nodes
Convert our n+args stack homes to phi nodes,
essentially performing mem2reg ourselves,
eliminating the need for it at runtime.

Also, use b.getInt64(k) to create integer constants.

Also, print verifyModule() errors to stdout (instead of nowhere).

Also, update unit test to make sure we don't run off the end.

Bitcode still looks good:
    define void @skvm-jit-211960346(i64, i8*) {
    enter:
      br label %testK

    testK:                                            ; preds = %loopK, %enter
      %2 = phi i64 [ %0, %enter ], [ %6, %loopK ]
      %3 = phi i8* [ %1, %enter ], [ %7, %loopK ]
      %4 = icmp uge i64 %2, 16
      br i1 %4, label %loopK, label %test1

    loopK:                                            ; preds = %testK
      %5 = bitcast i8* %3 to <16 x i32>*
      store <16 x i32> <i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42>, <16 x i32>* %5, align 1
      %6 = sub i64 %2, 16
      %7 = getelementptr i8, i8* %3, i64 64
      br label %testK

    test1:                                            ; preds = %loop1, %testK
      %8 = phi i64 [ %2, %testK ], [ %12, %loop1 ]
      %9 = phi i8* [ %3, %testK ], [ %13, %loop1 ]
      %10 = icmp uge i64 %8, 1
      br i1 %10, label %loop1, label %leave

    loop1:                                            ; preds = %test1
      %11 = bitcast i8* %9 to i32*
      store i32 42, i32* %11, align 1
      %12 = sub i64 %8, 1
      %13 = getelementptr i8, i8* %9, i64 4
      br label %test1

    leave:                                            ; preds = %test1
      ret void
    }

and the final assembly looks the same:

    0x10a3f5000: movabsq $0x10a3f6000, %rax        ; imm = 0x10A3F6000
    0x10a3f500a: vbroadcastss (%rax), %zmm0
    0x10a3f5010: cmpq   $0xf, %rdi
    0x10a3f5014: jbe    0x10a3f504d
    0x10a3f5016: nopw   %cs:(%rax,%rax)
    0x10a3f5020: vmovups %zmm0, (%rsi)
    0x10a3f5026: addq   $-0x10, %rdi
    0x10a3f502a: addq   $0x40, %rsi
    0x10a3f502e: cmpq   $0xf, %rdi
    0x10a3f5032: ja     0x10a3f5020
    0x10a3f5034: jmp    0x10a3f504d
    0x10a3f5036: nopw   %cs:(%rax,%rax)
    0x10a3f5040: movl   $0x2a, (%rsi)
    0x10a3f5046: decq   %rdi
    0x10a3f5049: addq   $0x4, %rsi
    0x10a3f504d: testq  %rdi, %rdi
    0x10a3f5050: jne    0x10a3f5040
    0x10a3f5052: vzeroupper
    0x10a3f5055: retq

Change-Id: I12d11c7d5786c4c3df28a49bb3044be10f0770e0
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/273753
Reviewed-by: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-27 17:04:05 +00:00
Mike Klein
b614931bfa basic JIT support?
Codegen is unimpressive so far:

(lldb) dis -s fJITEntry -c 40
    0x10a4dd000: movq   %rdi, -0x8(%rsp)
    0x10a4dd005: movq   %rsi, -0x10(%rsp)
    0x10a4dd00a: movabsq $0x10a4eb000, %rax        ; imm = 0x10A4EB000
    0x10a4dd014: movaps (%rax), %xmm0
    0x10a4dd017: cmpq   $0x7, -0x8(%rsp)
    0x10a4dd01d: jbe    0x10a4dd066
    0x10a4dd01f: nop
    0x10a4dd020: movq   -0x10(%rsp), %rax
    0x10a4dd025: movups %xmm0, 0x10(%rax)
    0x10a4dd029: movups %xmm0, (%rax)
    0x10a4dd02c: addq   $-0x8, -0x8(%rsp)
    0x10a4dd032: addq   $0x20, -0x10(%rsp)
    0x10a4dd038: cmpq   $0x7, -0x8(%rsp)
    0x10a4dd03e: ja     0x10a4dd020
    0x10a4dd040: jmp    0x10a4dd066
    0x10a4dd042: nop
    0x10a4dd043: nop
    0x10a4dd044: nop
    0x10a4dd045: nop
    0x10a4dd046: nop
    0x10a4dd047: nop
    0x10a4dd048: nop
    0x10a4dd049: nop
    0x10a4dd04a: nop
    0x10a4dd04b: nop
    0x10a4dd04c: nop
    0x10a4dd04d: nop
    0x10a4dd04e: nop
    0x10a4dd04f: nop
    0x10a4dd050: movq   -0x10(%rsp), %rax
    0x10a4dd055: movl   $0x2a, (%rax)
    0x10a4dd05b: decq   -0x8(%rsp)
    0x10a4dd060: addq   $0x4, -0x10(%rsp)
    0x10a4dd066: cmpq   $0x0, -0x8(%rsp)
    0x10a4dd06c: jne    0x10a4dd050
    0x10a4dd06e: retq
    ...

Change-Id: I97576e7b6e0696f248853e55de4f045f2b5ce77c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/273518
Reviewed-by: Herb Derby <herb@google.com>
2020-02-27 13:01:37 +00:00
Jarrett Phillips
f9734c39b8 Adding fmls instruction
Change-Id: Ia1752196fd50ade2c3160dc401a36618433420d8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/270822
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-13 22:55:53 +00:00
Mike Klein
5cdeb390d0 only emit _imm ops when JITing for x86
There are probably ways to make this more efficient by only optimizing
what's necessary (e.g. try JIT first, then interpreter only if it fails)
and some other performance improvements to make, but for now I want to
focus mostly on keeping things simple and correct.

The line between Builder::done() and Program::Program() is particularly
fuzzy and becoming fuzzier here, and I think that'll be something
that'll change eventually.

This makes SkVMTest debug dumps more portable, though perhaps less
useful.  Might kill that feature soon now that SkVM is tested more
thoroughly in unit tests and GMs and bots and such.

Change-Id: Id9ce8daaf8570e5bea8b10f1a80b97f5b33d45dc
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/269941
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-10 19:26:05 +00:00
Mike Klein
ed9b1f1c1e refactor out a middle representation
Kind of brewing a big refactor here, to give me some room between
skvm::Builder and skvm::Program to do optimizations, backend
specializations and analysis.

As a warmup, I'm trying to split up today's Builder::Instruction into
two forms, first just what the user requested in Builder (this stays
Builder::Instruction) then a new type representing any transformation or
analysis we've done to it (OptimizedInstruction).

Roughly six important optimizations happen in SkVM today, in this order:
   1) constant folding
   2) backend-specific instruction specialization
   3) common sub-expression elimination
   4) reordering + dead code elimination
   5) loop invariant and lifetime analysis
   6) register assignment

At head 1-5 all happen in Builder, and 2 is particularly
awkward to have there (e.g. mul_f32 -> mul_f32_imm).
6 happens in Program per-backend, and that seems healthy.

As of this CL, 1-3 happen in Builder, 4-5 now on this middle
OptimizedInstruction format, and 6 still in Program.

I'd like to get to the point where 1 stays in Builder, 2-5 all happen on
this middle IR, and 6 stays in Program.  That ought to let me do things
like turn mul_f32 -> mul_f32_imm when it's good to, and still benefit
from things like common sub-expression elimination and code reordering
happening after that transformation.

And then, I hope that's also a good spot to do more complicated
transformations, like lowering gather8 into gather32 plus some fix up
when targeting an x86 JIT but not anywhere else.  Today's Builder is too
early to know whether we should do this or not, and in Program it's
actually kind of awkward to do this sort of thing while also having to
do register assignment.  Some middle representation might be right.

Change-Id: I9c00268a084f07fbab88d05eb441f1957a0d7c67
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/269181
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-06 20:09:53 +00:00
Mike Klein
c66882ebcd Revert "impl gather8/gather16 with gather32"
This reverts commit d4e3b9e8bc.

Reason for revert: will reland with fixes

Original change's description:
> impl gather8/gather16 with gather32
> 
> This is our quick path to JIT small gathers.
> 
> The idea is roughly,
> 
>    const uint32_t* ptr32 = ptr8;
>    uint32_t abcd = ptr32[ix/4];
>    switch (ix & 3) {
>      case 3: return (abcd >> 24)       ;
>      case 2: return (abcd >> 16) & 0xff;
>      case 1: return (abcd >>  8) & 0xff;
>      case 0: return (abcd      ) & 0xff;
>    }
> 
> With the idea that if we may load a given byte,
> we should also be allowed to load the four byte
> aligned word that byte falls within.
> 
> Change-Id: I7fb1085306050c918ccf505f1d2e1e87db3b8c9a
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/268381
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

TBR=mtklein@google.com,herb@google.com,reed@google.com

Change-Id: I48d800edc6517f37e04752c91616b666a5e0f384
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/268490
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-03 21:55:22 +00:00
Mike Klein
d4e3b9e8bc impl gather8/gather16 with gather32
This is our quick path to JIT small gathers.

The idea is roughly,

   const uint32_t* ptr32 = ptr8;
   uint32_t abcd = ptr32[ix/4];
   switch (ix & 3) {
     case 3: return (abcd >> 24)       ;
     case 2: return (abcd >> 16) & 0xff;
     case 1: return (abcd >>  8) & 0xff;
     case 0: return (abcd      ) & 0xff;
   }

With the idea that if we may load a given byte,
we should also be allowed to load the four byte
aligned word that byte falls within.
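
For reference, a minimal standalone C++ sketch of that trick, assuming a
little-endian target (the names and helper here are illustrative, not
SkVM's actual implementation):

    #include <cstdint>
    #include <cstring>

    // Load the aligned 32-bit word containing byte `ix`, then shift/mask
    // out the byte we actually wanted -- the switch above collapses to a
    // single shift on a little-endian target.
    static uint32_t gather8_via_gather32(const uint8_t* ptr8, int ix) {
        uint32_t abcd;
        std::memcpy(&abcd, ptr8 + (ix & ~3), sizeof(abcd));  // one aligned 32-bit load
        return (abcd >> (8 * (ix & 3))) & 0xff;
    }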

Change-Id: I7fb1085306050c918ccf505f1d2e1e87db3b8c9a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/268381
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-03 19:47:41 +00:00
Mike Klein
bc1ce2c0ca test premul/unpremul are no-ops when a==1.0f
Constant propagation means we can always notionally
unpremul and premul at the right points, and if alpha
was already opaque, they'll just drop away.

This has been true, but it's nice to put a test on it.
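
Spelled out scalar-style, the identity being locked down is roughly this
(illustrative only, not SkVM's blitter code): with a == 1.0f every
multiply and divide is by 1, so unpremul followed by premul folds away
to the original color.

    struct Color { float r, g, b, a; };

    static Color unpremul(Color c) {
        float inv = c.a != 0.0f ? 1.0f / c.a : 0.0f;
        return {c.r * inv, c.g * inv, c.b * inv, c.a};
    }
    static Color premul(Color c) {
        return {c.r * c.a, c.g * c.a, c.b * c.a, c.a};
    }
    // premul(unpremul({r, g, b, 1.0f})) == {r, g, b, 1.0f}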

Change-Id: Iacd2002d9e1a10b73e800a452f377001d5ba3777
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/268336
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-03 18:51:05 +00:00
Mike Klein
ba9da466cc radial gradients in skvm
- Add sqrt(), vsqrtps for x86.
- Hook into SkRadialGradient.
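
Roughly what sqrt() buys here, scalar-style — a sketch of the standard
radial mapping, not SkRadialGradient's exact code; it assumes (x,y) has
already been mapped into the gradient's local space:

    #include <cmath>

    // In the gradient's local space (center at the origin, radius 1),
    // the gradient parameter t is just the distance from the center,
    // so each pixel costs one sqrt.
    static float radial_t(float x, float y) {
        return std::sqrt(x*x + y*y);
    }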

Change-Id: I66a4598e30fe16610c59a512f7d962323ee5134a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/267196
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-28 21:04:36 +00:00
Mike Klein
93d3fabcc3 improve scalar gather32
This loads 32 bits instead of gathering 256 in the tail part of loops.

To make it work, add a vmovd with SIB addressing.

I also remembered that the mysterious 0b100 is actually a signal that
the instruction uses SIB addressing, and is usually denoted by `rsp`.

(SIB addressing may be something we'd want to generalize over like we
did recently with YmmOrLabel, but I'll leave that for Future Me.)
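
For the encoding-curious, a tiny illustrative sketch of that rule (these
helpers are stand-ins, not the Assembler's real mod_rm()/sib()
signatures): in a ModRM byte with mod != 0b11, r/m = 0b100 (rsp's
register number) doesn't name a register at all — it signals that a SIB
byte follows.

    #include <cstdint>

    // ModRM packs mod|reg|r/m; SIB packs scale|index|base.
    static uint8_t mod_rm(int mod, int reg, int rm) {
        return uint8_t(((mod & 3) << 6) | ((reg & 7) << 3) | (rm & 7));
    }
    static uint8_t sib(int scale, int index, int base) {
        return uint8_t(((scale & 3) << 6) | ((index & 7) << 3) | (base & 7));
    }
    // e.g. an operand like [base + index*4] is encoded as
    //   mod_rm(0b00, dst, 0b100), sib(/*scale 4 =*/0b10, index, base)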

Slight rewording where "scratch" is mentioned to keep it focused on
scratch GP registers, not "tmp" ymm registers.  Not a hugely important
distinction but helps when I'm grepping through code.

Change-Id: I39a6ab1a76ea0c103ae7d3ebc97a1b7d4b530e73
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/264376
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-01-14 18:24:56 +00:00
Mike Klein
b2b6a99dca impl gather32 for x86
Some TODOs left over to make the scalar
tail case better... as is, it issues a
256-bit gather for each 32-bit load!

I added a trimmed down variant of the existing
SkVM_gathers unit test to test just gather32,
covering this new JIT code.

Change-Id: Iabd2e6a61f0213b6d02d222b9f7aec2be000b70b
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/264217
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-14 17:24:45 +00:00
Mike Klein
c322f634a3 add movq
This does the equivalent of dst = *(src + off),
which we use to find our gather base pointer in gather32.

Change-Id: I09ca7bfd404d7dce6de454ef1ed4eee78ab29932
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/264216
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-01-14 03:02:35 +00:00
Mike Klein
beaa108018 add vgatherdps
A complicated instruction to say the least!

A "fun" wrinkle is that all the ymm registers must be unique!
(And the mask register is cleared by the instruction...)

Still kind of TODO is what that 0b100 r/m in the mod_rm() means.  Every
variant of the instruction I've assembled seems to have it set to 0b100
(e.g. 0x0c or 0x04) but I'd feel better if I knew what it meant.
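
For comparison, the same operation at the AVX2 intrinsics level (the
standard _mm256_mask_i32gather_ps intrinsic, not SkVM's assembler):
lanes whose mask sign bit is set load from base[ix[i]], the rest keep
src, and the hardware zeroes the mask as it completes, which is the
mask-clearing wrinkle noted above.

    #include <immintrin.h>

    // Gather eight floats: base[ix[0]], ..., base[ix[7]].  Requires AVX2.
    static __m256 gather_f32(const float* base, __m256i ix) {
        __m256 src  = _mm256_setzero_ps();                        // kept for masked-off lanes
        __m256 mask = _mm256_castsi256_ps(_mm256_set1_epi32(-1)); // all lanes enabled
        return _mm256_mask_i32gather_ps(src, base, ix, mask, /*scale=*/4);
    }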

Change-Id: Ia4ff5f8175bff545e2d10bb2d1b14f49073445a3
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/264116
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-01-13 23:51:35 +00:00
Mike Klein
f22faaf254 add vroundps, impl Op::floor on x86
Change-Id: Iad94adda2da74fefb5657d883120f85ad362327e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/263461
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-09 14:19:14 +00:00
Mike Klein
92ca3baba6 JIT today's new _imm ops
- Add YmmOrLabel struct to represent the concept that many
  x86 instructions can take a final argument as either a
  register or memory address, and that they all handle them
  the same way.
- Convert existing overloads like vmulps() to use YmmOrLabel.
- upgrade some other instructions to take YmmOrLabel
- use them to implement today's new _imm ops

This feels like a good spot for implicit constructors, no?

Change-Id: I435028acc3fbfcc16f634cfccc98fe38bbce9d19
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/263207
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-08 23:38:13 +00:00
Mike Klein
a6434a5ef5 refactor bit ops
- Remove extract... it's not going to have any special impl.
  I've left it on skvm::Builder as an inline compound method.
- Add no-op shift short circuits.
- Add immediate ops for bit_{and,or,xor,clear}.

This comes from me noticing that the masks for extract today are always
immediates, and then when I started converting it to be (I32, int shift,
int mask), I realized it might be even better to break it up into its
component pieces.  There's no backend that can do extract any better
than shift-then-mask, so might as well leave it that way so we can
dedup, reorder, and specialize those micro ops.
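
Spelled out scalar-style, the compound op this leaves behind is just the
following (a sketch; the skvm::Builder helper's exact signature may
differ):

    #include <cstdint>

    // No backend can do better than shift-then-mask, so extract() stays
    // a thin wrapper over the two micro ops.
    static uint32_t extract(uint32_t x, int shift, uint32_t mask) {
        return (x >> shift) & mask;
    }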

Will follow up soon to get this all JITing again,
and these can-we-JIT test changes will be reverted.

Change-Id: I0835bcd825e417104ccc7efc79e9a0f2f4897841
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/263217
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-08 21:20:54 +00:00
Mike Klein
6dbd7ff34a first foray into SkVM image shaders
Basic support for clamp/clamp/nearest/RGBA-8888/premul images.

A couple little changes to make this work:
   - add pixel-center offset to shader (x,y)

   - change the signature of gather??() calls to work
     more naturally with how we let effects build uniforms.
     Instead of gathering directly from one of the program
     arguments, load the gather base pointer off another
     uniforms pointer, just like any other uniform.

   - remove the default argument to uniform??() so that
     they parallel the new gather??() calls more closely.
     There was only one place that was using the default
     and I think it's clearer as an explicit 0 offset.

   - centralize some more helpers onto skvm::Builder so
     we can use them in both SkVMBlitter and SkImageShader.

Some diffs:
   - very, very small color diffs probably due to slightly
     different math converting between byte and float or blending;
   - small sampling coordinate diffs where skvm + SkRP agree,
     and the legacy shaders disagree.  That's fine by me.

Change-Id: I72634e7fed4f13e6cb41b8067104760f392ea3bf
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/262368
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-01-06 19:45:53 +00:00
Mike Klein
17e2714c9f skip _imm ops on ARM
These are really designed around x86, so forcing them
on ARM where our existing non-immediate ops work better
is kind of silly.

Change-Id: I6b66ed0b0a71b335becdcb1d67dec471620542b0
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/254440
Reviewed-by: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-20 00:08:05 +00:00
Mike Klein
37be7715fd implement assert_true on ARM
This all comes together as

    uminv tmp, condition
    fmov  gp, tmp
    cbnz  gp, all_true
    brk   0
  all_true:
    ...

The key idea is that uminv(vec) will return 0 if any of the inputs are 0,
and non-zero if all of the inputs are non-zero, namely 0xffffffff.

fmov moves that minimum from a vector register to a general purpose
register where we can test it with cbnz, compare and branch if non-zero.
This jumps over the `brk 0` debug trap when all inputs are true.
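
The same check expressed with NEON intrinsics rather than raw assembly
(a sketch of the semantics, not the JIT's emitted code):

    #include <arm_neon.h>
    #include <cassert>

    // uminv collapses the lane-wise condition: the minimum is 0 if any
    // lane is false, 0xffffffff if every lane is true.
    static void assert_true(uint32x4_t cond) {
        assert(vminvq_u32(cond) != 0 && "skvm assert_true failed");
    }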

Change-Id: If5deb77a77f52221d0649e537179743c45eb9cc5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/254479
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-13 22:09:59 +00:00
Mike Klein
8c1e0effbb sketch out structure for ops with immediates
Lots of x86 instructions can take their right hand side argument from
memory directly rather than a register.  We can use this to avoid the
need to allocate a register for many constants.

The strategy in this CL is one of several I've been stewing over, the
simplest of those strategies I think.  There are some trade offs
particularly on ARM; this naive ARM implementation means we'll load&op
every time, even though the load part of the operation can logically be
hoisted.  From here on I'm going to just briefly enumerate a few other
approaches that allow the optimization on x86 and still allow the
immediate splats to hoist on ARM.

1) don't do it on ARM
A very simple approach is to simply not perform this optimization on
ARM.  ARM has more vector registers than x86, and so register pressure
is lower there.  We're going to end up with splatted constants in
registers anyway, so maybe just let that happen the normal way instead
of some roundabout complicated hack like I'll talk about in 2).  The
only downside in my mind is that this approach would make high-level
program descriptions platform dependent, which isn't so bad, but it's
been nice to be able to compare and diff debug dumps.

2) split Op::splat up
The next less-simple approach to this problem could fix this by
splitting splats into two Ops internally, one inner Op::immediate that
guarantees at least that the constant is in memory and is compatible with
immediate-aware Ops like mul_f32_imm, and an outer Op::constant that
depends on that Op::immediate and further guarantees that constant has
been broadcast into a register to be compatible with non-immediate-aware
ops like div_f32.  When building a program, immediate-aware ops would
peek for Op::constants as they do today for Op::splats, but instead of
embedding the immediate themselves, they'd replace their dependency with
the inner Op::immediate.

On x86 these new Ops would work just as advertised, with Op::immediate a
runtime no-op, Op::constant the usual vbroadcastss.  On ARM
Op::immediate needs to go all the way and splat out a register to make
the constant compatible with immediate-aware ops, and the Op::constant
becomes a noop now instead.  All this comes together to let the
Op::immediate splat hoist up out of the loop while still feeding
Op::mul_f32_imm and co.  It's a rather complicated approach to solving
this issue, but I might want to explore it just to see how bad it is.

3) do it inside the x86 JIT
The conceptually best approach is to find a way to do this peepholing
only inside the JIT only on x86, avoiding the need for new
Op::mul_f32_imm and co.  ARM and the interpreter don't benefit from this
peephole, so the x86 JIT is the logical owner of this optimization.
Finding a clean way to do this without too much disruption is the least
baked idea I've got here, though I think the most desirable long-term.
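
To make the basic structure concrete, here's a stripped-down sketch of
the x86-oriented peephole this CL sets up (the types and helper below
are hypothetical stand-ins, not SkVM's real Builder): when the
right-hand side of a multiply is a splatted constant, emit mul_f32_imm
so the JIT can fold the constant in as a memory operand instead of
spending a register on it.

    #include <vector>

    enum class Op { splat, mul_f32, mul_f32_imm };

    struct Instruction {
        Op  op;
        int x, y;   // argument instruction ids, -1 if unused
        int imm;    // bit pattern for splat / *_imm ops
    };

    static Instruction mul_f32(const std::vector<Instruction>& program, int x, int y) {
        if (program[y].op == Op::splat) {
            // Right-hand side is a constant: use the immediate-aware op.
            return {Op::mul_f32_imm, x, -1, program[y].imm};
        }
        return {Op::mul_f32, x, y, 0};
    }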

Cq-Include-Trybots: skia.primary:Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Debug-All-SK_USE_SKVM_BLITTER,Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Release-All-SK_USE_SKVM_BLITTER
Change-Id: Ie9c6336ed08b6fbeb89acf920a48a319f74f3643
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/254217
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2019-11-12 20:17:55 +00:00
Mike Klein
749eef6b1a implement assert_true on x86
The logic implemented here is roughly

  assert_true(v):
     if any ~v {
         int3()
     }

  in assembly as

  ```
    vptest v, constant 0xffffffff mask
    jc ok
    int3
  ok:
  ```

jc branches when (~v & mask) is all zero; with the mask fully set, that
means branch when ~v is all zero, which is to say v is all ~0, all true.
So we jump over the int3 breakpoint when every lane of v is true.
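
The same check via the vptest intrinsic (a sketch of the semantics, not
the JIT's emitted code): testc computes (~v & mask) and reports, through
the carry flag, whether every bit came out zero — i.e. whether every
lane of v was all-ones.

    #include <immintrin.h>
    #include <cassert>

    static void assert_true(__m256i v) {
        const __m256i mask = _mm256_set1_epi32(-1);
        // _mm256_testc_si256 returns 1 exactly when (~v & mask) == 0.
        assert(_mm256_testc_si256(v, mask) && "skvm assert_true failed");
    }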

Cq-Include-Trybots: skia.primary:Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Debug-All-SK_USE_SKVM_BLITTER
Change-Id: Ie0fc1da15b1a0dba00c66af610ccde18f5985f8a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253897
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
2019-11-12 20:07:35 +00:00
Mike Klein
ee5864a170 add int3, vptest, jc
Will use these to implement assert_true on x86.

Change-Id: I9d2595a35518b6971dd8e418b583febd3960c7f6
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253896
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2019-11-11 16:33:01 +00:00
Mike Klein
1360117174 add assert_true()
This is an assert that is active in debug mode.  For the moment it only
works in the interpreter, but I plan to follow up with JIT code too.

assert_true() is a data sink like a store() as far as lifetime goes,
though we take care to allow it to be hoisted if its inputs are.  An
assert_true's existence will keep all its inputs alive, and in release
builds where we skip the instruction, those inputs will all drop away
automatically.

Tested locally by forcing the interpreter.  It shouldn't be long before
I have at least x86 JIT asserts working too.

Change-Id: I7aba40d040436a57a6b930790f7b8962bafb1a8c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253756
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-11 15:12:18 +00:00
Mike Klein
6e4aad91c3 rename to_i32 -> trunc, and add round
This plumbs through round but doesn't use it.  I want that change to be
its own CL.  It's nice to have assembler support and the name changes
even if I revert using round.

Change-Id: I6d67ec5c63546069eb7cc1c91599b599bafcda66
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253724
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-08 21:00:51 +00:00
Mike Klein
a53e47fe94 native f32 min/max
No diffs.

Change-Id: Ia0b35c2787e27d74763f21b81072affa6caf1e5a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253720
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2019-11-08 20:21:38 +00:00
Mike Klein
81a8d282d3 Reland "hook up float comparisons to x86 JIT"
This is a reland of 12cea8d6c4

Now implementing float comparisons on ARM also.
The only vaguely tricky thing is that x!=y is ~(x==y).
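
In NEON intrinsic terms that's just the following (a sketch; the arm64
JIT emits the instructions directly): AArch64 has a lane-wise fcmeq but
no fcmne, so not-equal is the bitwise inverse of equal.

    #include <arm_neon.h>

    static uint32x4_t neq_f32(float32x4_t x, float32x4_t y) {
        return vmvnq_u32(vceqq_f32(x, y));   // x != y  ==  ~(x == y)
    }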

Original change's description:
> hook up float comparisons to x86 JIT
>
> This gets the draws in gm/skvm.cpp all JITing again,
> and in one of the unit tests.
>
> (Everything draws the same of course.)
>
> Change-Id: Iada28690d9df78f9d444ee3765e21beb29239672
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253166
> Auto-Submit: Mike Klein <mtklein@google.com>
> Reviewed-by: Mike Klein <mtklein@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

Cq-Include-Trybots: skia.primary:Test-Android-Clang-NVIDIA_Shield-CPU-TegraX1-arm64-Debug-All-Android
Change-Id: I771b8a327a958db8a0d509d55863ade935a00035
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253401
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-07 18:55:14 +00:00
Mike Klein
3f7c865936 avoid the JIT on MSAN builds
JIT code isn't MSAN-instrumented, so we won't see when it uses
uninitialized memory, and MSAN won't see the writes it makes as properly
initializing memory.  Instead force the interpreter, which should let
MSAN see everything our programs do properly.

This refactors so that SkVM.cpp is the only code to look at whether
SKVM_JIT is defined, and undefines it when built with MSAN.  Added
a simple regression test too.
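
One conventional way to spell that guard, sketched here with clang's
feature-test macro (the exact lines in SkVM.cpp may differ):

    // Force the interpreter under MSAN: JIT'd code isn't instrumented,
    // so MSAN can't track the memory it reads and writes.
    #if defined(__has_feature)
        #if __has_feature(memory_sanitizer)
            #undef SKVM_JIT
        #endif
    #endif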

Cq-Include-Trybots: skia.primary:Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Debug-All-MSAN
Change-Id: Ic7cca2621f84dfba7174127738744d6c68f85f2e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253410
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-07 18:24:34 +00:00
Mike Klein
297d5a03e6 Revert "hook up float comparisons to x86 JIT"
This reverts commit 12cea8d6c4.

Reason for revert: unit tests failing on ARM... will try again once I have float comparisons implemented for ARM too.

Original change's description:
> hook up float comparisons to x86 JIT
> 
> This gets the draws in gm/skvm.cpp all JITing again,
> and in one of the unit tests.
> 
> (Everything draws the same of course.)
> 
> Change-Id: Iada28690d9df78f9d444ee3765e21beb29239672
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253166
> Auto-Submit: Mike Klein <mtklein@google.com>
> Reviewed-by: Mike Klein <mtklein@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

TBR=mtklein@google.com,reed@google.com

Change-Id: Ie07e580b4998199338217a27d4fad34c679ffc23
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253399
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-07 15:35:12 +00:00
Mike Klein
12cea8d6c4 hook up float comparisons to x86 JIT
This gets the draws in gm/skvm.cpp all JITing again,
and in one of the unit tests.

(Everything draws the same of course.)

Change-Id: Iada28690d9df78f9d444ee3765e21beb29239672
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253166
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-07 14:36:32 +00:00