This is just like mul(F32,F32) but optimizes 0*x == 0.
Use it in SkSLVMGenerator; SkSL already applies this optimization.
PS2 has a sneaky version using % as a fast_mul() operator, and
PS3 has a sneakier version using ** instead.
We could of course write this all out using fast_mul() the long way,
but I found that quickly became difficult to read.
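A rough scalar sketch of the intent (not SkVM's builder code; SkVM applies
the shortcut when an operand is a constant zero at build time):

    // fast_mul() is mul() except that a zero operand wins outright,
    // so 0 * inf and 0 * NaN come out as 0 instead of NaN.
    static float fast_mul(float x, float y) {
        if (x == 0.0f || y == 0.0f) {
            return 0.0f;
        }
        return x * y;
    }
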
Change-Id: Iae35ce54411abc00e7729e178eb6a10f151a5304
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/368838
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
We can of course use allow_jit to test with and without JIT!
This testing was the only reason Program::dropJIT() was public. Given
how tricky its implementation is, I'd rather keep it a private detail
than an exposed API, in case one day we find we need to make it impossible.
Change-Id: Ifa256355309d9baf1bae506d75951381dce9b53c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/367896
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
We have a global flag controlling whether skvm::Programs JIT,
and this adds a per-Program flag to skvm::Builder::done().
Use it for single-color color filtering, and add a unit test.
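A usage sketch (the exact argument layout of done() is assumed here, not
quoted from the header):

    skvm::Builder b;
    // ... append instructions to b ...

    // The same Builder can produce a JIT-capable Program and an
    // interpret-only Program, which is handy for A/B testing.
    skvm::Program jit    = b.done("example", /*allow_jit=*/true);
    skvm::Program interp = b.done("example", /*allow_jit=*/false);
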
Change-Id: I3a87761c8c6b818111d03c97b31f8b30d9f2c194
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/367856
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
store() returns a success bool only because it can, because it wasn't
going to return any sort of skvm::Value anyway. But it should never
fail given a well-formed skvm::PixelFormat, e.g. one from
SkColorType_to_PixelFormat. So move the "error handling" inside, really
just asserting/assuming it doesn't fail.
And similarly, skvm::SkColorType_to_PixelFormat() can no longer fail, so
have it return the skvm::PixelFormat directly instead of the bool I used
to stage things back when building this out.
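Roughly, the signatures go from the staging form to the direct form
(sketched from the description above, not copied from the header):

    // Before: could "fail", returning false.
    bool SkColorType_to_PixelFormat(SkColorType, PixelFormat*);

    // After: every SkColorType maps to a PixelFormat, so just return it.
    PixelFormat SkColorType_to_PixelFormat(SkColorType);
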
Change-Id: I6dc3b6da32cdaaef377fe59b8c94846e902841ee
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/367796
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
These have a kind of neat way of encoding the lane index, using the Q
bit to pick the lower or upper 64 bits of the register, then the S bit
to pick the 32-bit lane within those 64 bits. Usually Q=1 distinguishes
a 128-bit op from a Q=0 64-bit op, so its repurposing here is at first
surprising, but actually very fitting.
I'd eventually like load64/128 to use these like this:

    Reg tmp0 = alloc_tmp(2),
        tmp1 = (Reg)(tmp0+1);
    if (scalar) { a->ld24s(tmp0, arg[immA], 0); }
    else        { a->ld24s(tmp0, arg[immA]   ); }
    mark_tmp_as_dst(tmp0, tmp1);

where the mechanism to track up to four registers per value and
implement mark_tmp_as_dst(...) for more than one argument is what I'm
still working on.
Change-Id: I944e571de19f65d41f462406ce35f0f2a35bb381
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/360700
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
store64 and store128 will use the st?.4s instructions,
and load64/load128 the ld?.4s. The tricky bit for both
of course is that they load and store more than a single
register, and that those registers need to be adjacent.
Change-Id: I613d06cbcc6e00bfc16b1a2c88412dbbbb1c55ed
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/356344
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This makes almost all existing code read more clearly.
Change-Id: I314331d9aa2ecb4664557efc7972e1a820afc250
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/352085
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
We've been assuming that all Ops with the same arguments produce the
same value and deduplicating them, which results in a simple common
subexpression eliminator.
But we can't soundly dedup two identical loads with a store between;
that store could change the memory those loads read, producing different
values, as demonstrated by the first new unit test.
Then, by similar reasoning, it may at first seem fine to deduplicate
stores, e.g.

    store32 arg(0), v1
    store32 arg(0), v1

That second store certainly does look redundant. But if we slot a
different store in between, it's no longer redundant:

    store32 arg(0), v1
    store32 arg(0), v2
    store32 arg(0), v1

If we dedup those two v1 stores, we'll skip the second and be left with
v2 in our buffer instead of v1. This is the second new unit test.
Now, uniform32 and gather ops also touch memory... are they safe to
dedup? Surprisingly, yes! Uniforms are easy: they're read-only. No
way to store to uniforms, so no intervening store can invalidate them.
Gathers are a little fuzzier, in that the buffer we gather from is
uniform in practice, but not strictly required to be... it's not
impossible to construct a program that gathers from a buffer that the
program also stores to, but you'd have to go out of your way to do it,
and it's not a pattern we use today, and SkVM does not provide the
synchronization primitives you'd need to make attempting that even
vaguely sensible. So gathers in practice can also be deduplicated.
In general it's safe to dedup an operation unless it touches _varying
memory_, i.e. loads and stores. uniform32 and gathers touch
non-varying memory, so they're safe, and while index is varying, it
doesn't touch memory.
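A minimal sketch of that rule (not the actual SkVM code; the Op names
follow the ones used elsewhere in these messages):

    // An instruction is safe to deduplicate unless it touches varying memory.
    static bool touches_varying_memory(skvm::Op op) {
        switch (op) {
            case skvm::Op::load8:  case skvm::Op::load16:
            case skvm::Op::load32:
            case skvm::Op::store8: case skvm::Op::store16:
            case skvm::Op::store32:
                return true;   // loads/stores read or write varying memory
            default:
                return false;  // uniform32, gather*, index, arithmetic are fine
        }
    }
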
Change-Id: Ia275f0ab2708d3f71e783164b419436b90f103a9
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/350608
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
I noticed is_always_varying() is a little wrong, and this new test demos
how. This isn't terribly important: in most practical situations
gathers will indeed be varying.
Change-Id: I456d4c7287147726c49ebb5af5af347c65cd21d4
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/350602
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
The new unit test demonstrates load/store reordering is error-prone.
At head we're allowing loads from a given pointer to reorder later than
a store to that same pointer, and boy, that's just not sound. In the
scenario constructed by the test we reorder this swap,

    x = load32 X
    y = load32 Y
    store32 X y
    store32 Y x

using schedule() (following Op argument data dependencies) into

    y = load32 Y
    store32 X y
    x = load32 X
    store32 Y x

which moves `x = load32 X` illegally past `store32 X y`.
We write `y` twice instead of swapping `x` and `y`.
It's not impossible to implement that extra reordering constraint: I
think it's easiest to think about by adding implicit use edges in
schedule() from stores to prior loads of the same pointer. But that'd
be a little complicated to implement, and doesn't handle aliasing at
all, so I decided to ponder on other approaches that handle a wider
range of programs or would have a simpler implementation to reason
about. I ended up walking through this rough chain of ideas:
0) reorder using only Op argument data dependencies (HEAD)
1) don't let load(ptr) pass store(ptr) (above)
2) don't let any load pass any store (allows aliasing)
3) don't reorder any Op that touches memory
4) don't reorder any Op, period.
This CL is 4). It's certainly the easiest and cheapest implementation.
It's not clear to me that we need this scheduling, and should we find we
really want it I'll come back and work back through the list until we
find something that meets our needs.
(Hoisting of uniforms is unaffected here.)
Change-Id: I7765b1d16202e0645b11295f7e30c5e09f2b7339
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/350256
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Should allow us to test SkSL->SkVM portably using dump().
Change-Id: If55e8e144f04643c02bd65baa84158ac1bf441b5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/346336
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
None of these earliest testing tools are useful anymore
now that we can do useful work with SkVM and SkVMBlitter.
Change-Id: I8b25ef6ddd101c4ff8617c6742343dedb4764922
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/345456
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Brian Osman <brianosman@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
Change-Id: I233398bf34411231d44613d89aed28935fe30a28
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/340796
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This finishes up the existing SkVM ops on arm64.
I wish I had a unit test for this, but there are no diffs drawing RGBA F32.
Change-Id: I53725769fa2e7a1701f7360905205356e1ea18cb
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/340718
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Nothing too tricky here.
Change-Id: I48e51c301e53efc63fc92c378fe45a0e5a2df7e6
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/340520
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Much like store64, load64 really wants to use ld2.4s but that needs a
way to allocate adjacent registers. So, just like store64, do it
manually, this time with uzp (unzip).
Change-Id: Ie10cc8d2df57390d1c6709bd7485bb5158375078
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/340519
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Ideally we'd use st2.4s here but that needs its inputs in adjacent
registers, and I don't have a mechanism for that (yet). So instead
interlace manually using zip1/zip2.
Tested by SkVM_64bit.
Change-Id: I7b05fcd1f4398012755fc4f0d4e39743d0c69a94
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/340518
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Change-Id: If22eabb68b9293f5bc1d275535135d9760fe1ae5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/339578
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
fp16 is a more precise name, given that there are things like bfloat16,
and this may free up the word "half" for the same sort of more nebulous
format as we use it in SkSL.
Change-Id: I55c39f3670f2c300b9306c92a86c4ec7a2e7b5d7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/339577
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
- easy: ceil, floor, sqrt
- index is our first arm64 instruction to need a temporary,
  but other than that it's pretty simple, just N - iota as usual.
With Op::index now supported, `viewer --slide GM_runtime_shader`
frame time drops from ~1ms to ~0.24ms.
I accidentally swapped in a float-subtract for an int-subtract and
everything worked fine. o_O
Change-Id: I44c51506a6a9014b398d6943bb0e3712e4e52445
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/338661
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
Uniforms in practice are always pointers or 32-bit ints or floats, so
these are essentially dead code. The change to SkVMBlitter.cpp is the
only interesting change, and I think it makes more sense now than
before. The program will need float coverage in the end, so might as
well feed it one directly.
Change-Id: I7f1e77731cf10ccc35595012a6df4f9e54a0dad8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/338631
Commit-Queue: Mike Reed <reed@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
Now that I've been reminded that half-float compute is real and no
longer just a dream, Q14 kind of pales in comparison, and just gets in
my way when working on SkVM.
As usual I've left in assembler support and unit tests for those
instructions. The instructions are all pretty easy to keep working and
tested and don't get in the way, unlike the real "let's do Q14" stuff.
None of this Q14 code was hooked up to anything but unit tests, so no
capability lost here, and no diffs. As always, it'll be easy to restore
should we ever want to by looking at this CL.
Change-Id: Ia42a96652b381267a7c3ec563b5978efcfc717a7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/338630
Auto-Submit: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Reviewed-by: Mike Reed <reed@google.com>
I'm not using any of these, so nice to move them aside.
Change-Id: Id43c1606c2f9e6bba0d8f6bd7d2f8f5e02d5b762
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/338629
Auto-Submit: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Reviewed-by: Mike Reed <reed@google.com>
This adds assembler support for a bunch of ARM instructions and uses
them to implement a bunch of SkVM ops. No diffs.
movs() seems strictly more useful than fmovs(), so I've replaced it.
Change-Id: Ied38a44461653598269421b0b56bef4eb19bb1e9
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/335918
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
pack(x,y,bits) as an alias for x|(y<<bits) only existed originally to
implement it with the SLI arm64 instruction, but I've since realized
that was misguided.
I had thought the assumption on pack ("(x & (y << bits)) == 0"), i.e.
"no overlap between x and the shifted y", was enough to make using SLI
legal, but it's actually not strong enough a requirement.
The SLI docs say "...inserts the result into the corresponding vector
element in the destination SIMD&FP register such that the new zero bits
created by the shift are not inserted but retain their existing value."
The key thing not mentioned there is what happens with zero bits _not_
created by the shift, the ones already present at the top of y. They're
of course inserted, overwriting any previous values.
This means SLI (and so pack()) becomes strictly order-dependent in a way
I had never intended. This will work as you'd think,

    skvm::I32 px = splat(0);
    px = pack(px, r,  0);
    px = pack(px, a, 24);

but this version swapping the two calls to pack() will overwrite alpha,

    skvm::I32 px = splat(0);
    px = pack(px, a, 24);
    px = pack(px, r,  0);

I find that error-prone, so I've removed Op::pack and replaced it
with a simple expansion to x|(y<<bits). That of course works in either
order.
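A builder-level sketch of that expansion (Builder method names like
bit_or/shl assumed, not quoted from the header):

    skvm::I32 pack(skvm::Builder* b, skvm::I32 x, skvm::I32 y, int bits) {
        // x | (y << bits): OR just merges the non-overlapping bits, so
        // accumulating channels in either order gives the same result,
        // unlike SLI.
        return b->bit_or(x, b->shl(y, bits));
    }
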
This new test can't JIT at head, but if we implement the other missing
instructions (soon, dependent CL) it would start failing when JIT'd.
The interpreter and x86 were both fine, since they're both doing what's
now the only approach to pack(), the simple x|(y<<bits).
I've left assembler support for SLI in case we want to try it again.
Change-Id: Iaf879309d3e1d0a458a688f3a62556e55ab05e23
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/337197
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This tweak makes these conversion functions' names
match the names of the types, e.g. to_F32() makes F32.
Change-Id: I4d71c9bd17d835d09375e3343ee4316082b02889
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/319763
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
I've been thinking and rethinking and rethinking how best to use 16-bit
values like Q14 fixed-point in SkVM. Here are some ways:
A) don't... just use 32-bit values instead
B) use 16x2-bit pairs to match the narrower 32-bit lane count
C) double-pump 32-bit values to match the wider 16-bit lane count
D) use native 16- and 32-bit values and let the backends sort it out
A) is how things work today, and C) is how SkRasterPipeline's lowp mode
works. Having tried out B) and C) both for a good fair shake, they were
both already awkward to work with after writing just a few functions. I
would not give up on them entirely, but they're no longer my favorites.
D) is subtle and my new favorite. It's easiest to program with SkVM
when the values we're holding represent single values and the backend
handles any parallelism for us. That suggests we add a simple 16-bit
Q14 to the existing 32-bit I32 and F32 types, where they can be actively
converted between as normal, but not freely bit-punned as a no-op. D) says
people shouldn't have to choose between A-C) up front... each backend
can handle it itself.
Under strategy D), it's entirely the backend's job to decide how to
represent each value, and how to vectorize them. We don't need to
know as a user, and the backends can use the program itself to inform
how they vectorize. 16-bit values could live in xmm registers and
32-bit values in ymm, or the 16-bit values could go in the low half of a
ymm, or the even lanes of a ymm, or a full ymm and use two for 32-bit
values, etc. etc. This all is a backend choice, not something we should
have to know about when writing a program using Q14/I32/F32.
My next steps are to get Q14 operations tested and plumbed through the
JIT again, and to build out a blitter and a few effects using Q14 color
channels. Then, independently, we can look at each backend and how to
vectorize them. Some ideas:
1) keep running at current vectorization, with half rate 16-bit ops
2) pump up to 2x wider vectorization unconditionally to favor 16-bit
3) pump up to 2x wider vectorization only when any 16-bit op is used
These choices can be made independently for each backend (JIT, LLVM,
interp), and I wouldn't be surprised to find that we'll want to do them
differently. For instance, the interpreter is already running at 32x
vectorization... it might be that pumping it higher won't help anything.
Change-Id: Ib8ad2b1bf790e8c4e3acfb4818d4032f7628e8f8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/319321
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
I suspected this wouldn't work, but turns out it's fine.
Change-Id: I91042d458804f3a6af29389de5e6622a39cf23e5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317309
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
It shifts a negative number left,
which UBSAN reminds us is not allowed in C++.
Cq-Include-Trybots: luci.skia.skia.primary:Test-Debian10-Clang-GCE-CPU-AVX2-x86_64-Debug-All-SkVM_ASAN
Change-Id: I955d53b677f605b3a0f6e2bd31b16e2cc2f64ac5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317260
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
These tests cover all the new Q14x2 ops.
As usual, left myself a few TODOs...
While working on this, I uncovered a little subtlety about the blend
instructions we use to implement Op::select... unlike the platonic

    (cond & true_val) | (~cond & false_val)

the hardware instructions are not bitwise! Each of the fancy blend
instructions like vblendvps or vpblendvb has a fundamental granularity
larger than a bit (4 bytes for vblendvps, 1 byte for vpblendvb). If
you're using a mask with a granularity of, say, 2 bytes, you need to be
using something with equally fine granularity --- bitwise is ok,
bytewise is ok, 2-byte-wise is ok, but 4-byte-wise isn't.
Took a quick survey, and the Op::select implementations we're using for
the x86 and ARM JITs are both bytewise, so I think they're fine. Would
have to think
a bit about LLVM, but these unit tests should at least fire if it were
wrong. The skvx if_then_else() I've been using in the interpreter has
been 4-byte-wise, but I'm refining that down to 1-byte-wise with
https://skia-review.googlesource.com/c/skia/+/317170.
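A standalone illustration of that pitfall on a single 32-bit lane (not
Skia code): a vblendvps-style select keys off only the top bit of each
4-byte lane, so a mask set in just the low 2 bytes selects the wrong half.

    #include <cstdint>
    #include <cstdio>

    int main() {
        uint32_t cond = 0x0000ffff,   // 2-byte-granularity mask
                 t    = 0xaaaaaaaa,
                 f    = 0xbbbbbbbb;

        uint32_t bitwise   = (cond & t) | (~cond & f);       // mixes halves
        uint32_t floatwise = (cond & 0x80000000) ? t : f;    // whole lane from f

        printf("bitwise   = %08x\n", bitwise);    // bbbbaaaa
        printf("floatwise = %08x\n", floatwise);  // bbbbbbbb
        return 0;
    }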
Change-Id: I09cbc8b91cdb9e50088ee4f6ddf202faa1bf2cb1
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317159
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Add a slew of 16-bit instructions for experiments.
I want to try a fixed-point path through SkVMBlitter, continuing to
represent geometry with F32, but color channels in 16 bits, with several
possible representations:
- unorm8 lowp like SkRasterPipeline (0 -> 0.0, 0x00ff -> 1.0)
- 15-bit SkFixed15 fixed-point (0 -> 0.0, 0x8000 -> 1.0)
- 14-bit signed fixed-point (0 -> 0.0, ±0x4000 -> ±1.0)
I'm leaning towards the 14-bit version for being able to hold a good
range of temporary values in [-2,2), or perhaps even a 13-bit analog for
even a little more safety range. Mostly something new to try.
Most of these instructions are pretty obvious, with notes on a few:
vpavgw is an unsigned (x+y+1)>>1, and is useful for converting
unorm8 up to Q14. There are a couple of ways to do this pretty well;
using vpavgw is the best and uses the fewest instructions:

    A) (x << 6) + ( x    >> 2) + (x == 255)   // Ok approx.
    B) (x << 6) + ((x+1) >> 2)                // Better approx.
    C) vpavgw(x << 7, x >> 1)                 // Perfect math!

The best reverse math I've found is (x >> 6) - (x > 16319).
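A standalone scalar check of C)'s "perfect math" claim (not Skia code);
vpavgw's unsigned rounding average is modeled as (a+b+1)>>1:

    #include <cassert>
    #include <cstdint>

    int main() {
        auto vpavgw = [](uint32_t a, uint32_t b) { return (a + b + 1) >> 1; };
        for (uint32_t x = 0; x < 256; x++) {
            uint32_t got  = vpavgw(x << 7, x >> 1);
            uint32_t want = (x * 0x4000u + 127) / 255;   // round(x * 0x4000 / 255)
            assert(got == want);                         // exact for every x
        }
        return 0;
    }
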
vpmulhrsw is the key to the whole thing as usual, letting us do
16x16->16-bit multiplies. An SkFixed15 multiply is vpmulhrsw
followed by vpabsw (also added here), and a Q14 multiply is
vpmulhrsw followed by a simple <<1.
I've added both signed and unsigned min and max. Not entirely
sure they'll all be used, but I do have my eye on vpminuw as a
single-instruction clamp to [0,0x4000] ~~> [0.0,1.0], treating
any negative Q14 as very large unsigned.
Change-Id: I0db7f3f943ef6c9a600821444cc5b003fe5f675d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/317119
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Added vpinsrd to make it easy, and fixed a comment on vpinsrb's tests.
Nothing too tricky, just the naive implementations.
The hardest part was getting all the data to the right places.
No diffs!
Change-Id: Ie4c1f1e429abfa75ca80a93d108061287d5ace80
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306872
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This noop refactor ports the lane-immediate idea to the 64-bit loads and
to the 128-bit stores, and renames them to simply load64, load128, and
store128.
The store128 part of this change is _slightly_ sneaky in that we pack
both the argument pointer index and the lane into the int immz, but
there's plenty of space there: the lane needs 1 bit, and the argument
pointer index needs maybe 3 or 4 at most.
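One plausible encoding, purely to illustrate the space math (hypothetical,
not necessarily the bit layout actually used):

    // The lane needs 1 bit; the argument pointer index only needs a few,
    // so both fit easily in one int immediate.
    static int pack_immz(int ptr_index, int lane) { return (ptr_index << 1) | lane; }
    static int immz_ptr (int immz)                { return immz >> 1; }
    static int immz_lane(int immz)                { return immz &  1; }
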
Change-Id: I3fa01bf31312b8a69c7e287d649470ba15a8ea40
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306810
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This allows us to JIT functions taking one more argument.
Our x86 JIT was the weak link: LLVM handles arbitrary numbers
of arguments, and our aarch64 JIT handles up to seven already.
I plan to pass one more source varying sometimes, and in extremely
unusual cases we could need six pointers:
1) uniforms
2) destination
3) 8-bit coverage plane
4) 8-bit multiply plane
5) 8-bit add plane
6) source varying
Those varyings 3-5 are all indexable off the coverage pointer 3) if we
know the dimensions, so I could be convinced that we should only pass
one there, making our maximum number of arguments four. I'll be looking
at that independently, but it doesn't hurt to have the capability to go
up to six either way.
Change-Id: Id7a5b88e382a95bb560633e95c5be273b7ea67d1
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306241
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
The 3 bits that represent rsp are used as a signaling mechanism to
choose between compact base + disp encoding and full base + disp +
scale*index encoding, one byte longer. If a base is encoded as rsp in
the MOD R/M byte, an SIB byte follows.
r12 shares its 3 bottom bits with rsp, so we need to treat it like rsp
when deciding whether or not we need an SIB byte. (As usual registers
r8-15's distinguishing upper bit is carried by a REX/VEX prefix.)
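A minimal sketch of that rule (not the Assembler's actual code):

    // Only the low 3 bits of a register number land in the ModR/M byte,
    // so r12 (12 = 0b1100) looks like rsp (4 = 0b0100) there and must also
    // be followed by an SIB byte when used as a base.
    static bool base_needs_sib(int reg /*0..15*/) {
        return (reg & 7) == 4;   // rsp is 4; r12's low 3 bits are also 4
    }
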
rsp is also treated as a special signal in the index field of the SIB
byte, meaning essentially scale=0, and as a result it's not possible to
use rsp as an index. It _is_ possible to use r12 as an index, with a
test added here. This worked without any code change. It seems the
index=rsp -> scale=0 signal is triggered by all four bits of the index
register, including the X upper bit from REX/VEX.
I have found these charts useful:
https://wiki.osdev.org/X86-64_Instruction_Encoding#32.2F64-bit_addressing_2
Looking at those charts I noticed that rbp/r13 are also special cases,
so I'm eyeing them warily and will avoid using them for now.
Change-Id: Id78f826a39c060b03000eae7c50c642ef44d57db
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/306237
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
vpermps (added here) makes this very easy,
with an index controlling what 32-bit values go where.
An index of the form {0,2,4,6|?,?,?,?} will put the 4 low 32-bit halves
of 4 64-bit values in lanes 0,1,2,3. We can use that twice to get all 8
low halves, then use our new vperm2f128 to put them together. Conveniently
vpermps can also load directly from memory:

    vpermps    (%rdi), {0,2,4,6|?,?,?,?}, lo
    vpermps  32(%rdi), {0,2,4,6|?,?,?,?}, hi
    vperm2f128 0x20, lo,hi, dst

We don't care what those top four indices are for load64_lo, so we'll
use them as the indices for load64_hi. That makes the full index
{0,2,4,6|1,3,5,7}, and load64_hi will just vperm2f128 the other 128 bits
of lo/hi:

    vpermps    (%rdi), {?,?,?,?|1,3,5,7}, lo
    vpermps  32(%rdi), {?,?,?,?|1,3,5,7}, hi
    vperm2f128 0x31, lo,hi, dst

vpermps needs its index in a register, so we use a temporary for that.
Our logical lo can alias dst, and hi can alias that index, so it's just
one extra temporary register in the end.
Change-Id: Ie6a4efbf12ddada45dd09c0f580fa7350cf3019e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/305171
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Add some packing instructions to make it possible.
The gist is that we've got

    r(x) = {a,b,c,d|e,f,g,h}
    r(y) = {i,j,k,l|m,n,o,p}

where r(x) holds each low 32-bit half of a 64-bit value,
and r(y) holds the high halves. We want to write

    a,i,b,j,c,k,d,l,e,m,...

So first the vpunpck[lh]dq instructions produce

    L = {a,i,b,j|e,m,f,n}
    H = {c,k,d,l|g,o,h,p}

which gets us halfway there. The vperm2f128s select the low (0x20) or
high (0x31) 128-bit halves of L/H, so we end up writing to memory

    dst+0:  a,i,b,j,c,k,d,l
    dst+32: e,m,f,n,g,o,h,p

Existing tests cover that store64 works.
Change-Id: Ic00ad9bdb448b79867584c27cf0114a42ed32379
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/305156
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Three TODOs, all basically the same idea: divide-by-zero is not the only
way to produce non-finite results from a division. You can also divide
by very-near-zero, and maybe some other ways.
Added is_finite() to make this clear. is_finite() is almost as cheap as
the comparisons it replaces, so performance shouldn't be affected.
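A scalar sketch of the idea (not necessarily how skvm implements it): a
float is finite exactly when its exponent field isn't all ones, since
all-ones marks inf and NaN.

    #include <cstdint>
    #include <cstring>

    static bool is_finite(float x) {
        uint32_t bits;
        std::memcpy(&bits, &x, sizeof bits);
        return (bits & 0x7f800000) != 0x7f800000;   // exponent not all ones
    }
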
Change-Id: I0a803e9ab4e3286f4e10a13d3aacee370eaaa803
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/304669
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This adds load/store ops for 64-bit values, with two load64 instructions
returning the low and high 32 bits respectively, and store64 taking both.
These are implemented in the interpreter and tested, but not yet JIT'd
or hooked up for loading and storing 64-bit PixelFormats. Hopefully
those two CLs will follow shortly.
Change-Id: I7e5fc3f0ee5a421adc9fb355d0b6b661f424b505
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/303380
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
- add f32<->f16 functions to skvx
- add f32<->f16 x86 instructions to skvm::Assembler
- add f32<->f16 ops to skvm,
using the skvx functions in the interpreter
Still TODO:
use the new x86 instructions in the JIT
(For now, as in many other ways, the aarch64 JIT
continues to languish. Will pick that back up one day.)
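For reference, a standalone scalar version of the f16 -> f32 direction
(not the skvx code; subnormal halves are flushed to zero here for brevity):

    #include <cstdint>
    #include <cstring>

    static float halfToFloat(uint16_t h) {
        uint32_t s = (h >> 15) & 1,        // sign
                 e = (h >> 10) & 0x1f,     // 5-bit exponent, bias 15
                 m =  h        & 0x3ff;    // 10-bit mantissa
        uint32_t bits;
        if (e == 0x1f) {                   // inf or NaN
            bits = (s << 31) | (0xffu << 23) | (m << 13);
        } else if (e == 0) {               // zero (subnormals flushed here)
            bits = (s << 31);
        } else {                           // normal: rebias 15 -> 127
            bits = (s << 31) | ((e + 112) << 23) | (m << 13);
        }
        float f;
        std::memcpy(&f, &bits, sizeof f);
        return f;
    }
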
Change-Id: Ib8dc1ccdc75ecb23769ea4947d66d3ab22520f23
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/302942
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This is just minor little stuff I've been meaning to do,
with essentially no impact anywhere.
- Add an easy-to-flip switch to disable the JIT.
- Stop checking so carefully whether we hasJIT()
in test_jit_and_interpreter(). This was helpful
for making progress but now just gets in the way.
Change-Id: I08065ba1f42700f9d7d63f8303af357ec5fe11ae
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/302944
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Just a little follow up, adding a mem->xmm vmovups
instruction to make it possible. Nothing tricky.
Change-Id: I319e11839e44ccda46e664c82fb858a18499f9be
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/299883
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
It's just a shortcut for

    Assembler::Label l;
    a->label(&l);

and it never really took off.
It's easier to work on Label without it.
Change-Id: I4a060f78f235ac3fcc87b996f5d9404ffba43c53
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/288997
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
On Windows, each time the test is run, one gets

    sk_fopen: fopen("resources\SkVMTest.expected", "wb") returned nullptr (errno:22): Invalid argument

due to 'expected' holding a read lock on the file when trying to open
the same file for writing. Windows locks files aggressively and in this
case there is no good reason to keep the read access when trying to
truncate and write to this file path.
This also changes the logic to only update the file in the error case
when the content would actually change. Previously it seems this file
would be re-written (usually with the same content) every time this test
ran.
Change-Id: I9c96f1e7e0692e57326fec351c7353c423014c9a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/287381
Commit-Queue: Ben Wagner <bungeman@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
Rename apply() to unary(), then add binary().
Fix unary to calculate N=base-inst+1.
Convert to simpler `auto&& fn` mode by renaming
approx_atan(y,x) to approx_atan2. Now we can pass
functions, lambdas, non-lambda functors, whatever.
Change-Id: I17a6aa137f224edc0accd0509c5023a30980fe39
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/286900
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Mike Reed <reed@google.com>
We're getting this wrong today, and likely also in these several
other instructions. We need to account for the immediate byte
that follows the ip-relative offset!
Add imm_byte_after_operand() to take care of this.
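Roughly, the fix amounts to this (a sketch of the arithmetic, not the
Assembler's code): the displacement is relative to the end of the whole
instruction, which sits any trailing immediate bytes past the end of the
displacement field.

    // Offset to encode in the 4-byte rip-relative field.
    static int rip_disp(size_t target,
                        size_t end_of_disp_field,
                        int    imm_bytes_after) {
        return (int)(target - (end_of_disp_field + imm_bytes_after));
    }
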
Change-Id: If0f4359b0a8e9d769bfde0d8456726e82f798123
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/285237
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This is a reland of 1283d55f35
... this time, also checking for HSW feature set.
Original change's description:
> Reland "gather8/16 JIT support"
>
> This is a reland of 54659e51bc
>
> ... now expecting not to JIT when under ASAN/MSAN.
>
> Original change's description:
> > gather8/16 JIT support
> >
> > The basic strategy is one at a time, inserting 8- or 16-bit values
> > into an Xmm register, then expanding to 32-bit in a Ymm at the end
> > using vpmovzx{b,w}d instructions.
> >
> > Somewhat annoyingly we can only pull indices from an Xmm register,
> > so we grab the first four then shift down the top before the rest.
> >
> > Added a unit test to get coverage where the indices are reused and
> > not consumed directly by the gather instruction. It's an important
> > case, needing to find another register for accum that can't just be
> > dst(), but there's no natural coverage of that anywhere.
> >
> > Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> > Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> > Reviewed-by: Herb Derby <herb@google.com>
> > Commit-Queue: Mike Klein <mtklein@google.com>
>
> Change-Id: I67f441615b312b47e7a3182e85e0f787286d7717
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284472
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>
Change-Id: Id0e53ab67f7a70fe42dccca1d9912b07ec11b54d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284504
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This reverts commit 1283d55f35.
Reason for revert: one more try...
Original change's description:
> Reland "gather8/16 JIT support"
>
> This is a reland of 54659e51bc
>
> ... now expecting not to JIT when under ASAN/MSAN.
>
> Original change's description:
> > gather8/16 JIT support
> >
> > The basic strategy is one at a time, inserting 8- or 16-bit values
> > into an Xmm register, then expanding to 32-bit in a Ymm at the end
> > using vpmovzx{b,w}d instructions.
> >
> > Somewhat annoyingly we can only pull indices from an Xmm register,
> > so we grab the first four then shift down the top before the rest.
> >
> > Added a unit test to get coverage where the indices are reused and
> > not consumed directly by the gather instruction. It's an important
> > case, needing to find another register for accum that can't just be
> > dst(), but there's no natural coverage of that anywhere.
> >
> > Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> > Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> > Reviewed-by: Herb Derby <herb@google.com>
> > Commit-Queue: Mike Klein <mtklein@google.com>
>
> Change-Id: I67f441615b312b47e7a3182e85e0f787286d7717
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284472
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>
TBR=mtklein@google.com,herb@google.com
Change-Id: I953fcd2aef308fd901880618fa540ac9f6d88e84
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284503
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This is a reland of 54659e51bc
... now expecting not to JIT when under ASAN/MSAN.
Original change's description:
> gather8/16 JIT support
>
> The basic strategy is one at a time, inserting 8- or 16-bit values
> into an Xmm register, then expanding to 32-bit in a Ymm at the end
> using vpmovzx{b,w}d instructions.
>
> Somewhat annoyingly we can only pull indices from an Xmm register,
> so we grab the first four then shift down the top before the rest.
>
> Added a unit test to get coverage where the indices are reused and
> not consumed directly by the gather instruction. It's an important
> case, needing to find another register for accum that can't just be
> dst(), but there's no natural coverage of that anywhere.
>
> Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>
Change-Id: I67f441615b312b47e7a3182e85e0f787286d7717
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284472
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>