I happened to have this on when profiling skottie_tool and got curious
why I was seeing the interpreter run and not JIT code. Mostly this
moves the code in bulk out of SkVMTest.cpp to SkVM.cpp so that code in
SkVM.cpp can call dump() on itself.
Also this CL has the skvm::Program hang onto the original value-based
builder program (in addition to its own interpreter program and JIT
program if we can). This is entirely so that when JIT bails out I
can have it dump out both the builder and interpreter programs as a
further debugging aid.
I'm still going to need more debug tools somewhere to figure out
what the program that needs 17 registers is, and what to do about
it.
Finally, remove skvmtool. It's annoying to maintain its build
rules, and I don't use it much if ever anymore.
Change-Id: I995d15d04bda79ddfc4d68bda8aaa3b5b9261f08
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/242520
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This makes the register recycling checks a bit more
precise. At head we never recycle a register that's
holding a hoisted value, which is overly conservative.
We really should never recycle a register that's still
needed. By extending the lifetime of any hoisted value
that's used in the loop, we prevent that, while still
allowing hoisted values that are only used in hoisted
computation to be reused.
This takes just a small tweak in the JIT code (removing
the !hoisted({x,y,z}) checks), and a somewhat larger
refactoring in the interpreter, making both hoisted and
non-hoisted code go through the same recycling register
assignment flow.
There's one diff in the existing cases where we now
reuse a hoisted register, and I've added a second test
just to make sure it's covered explicitly.
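
Roughly, the lifetime extension amounts to a pass like this over the
instructions (a minimal sketch, not the real SkVM code; the Instruction
fields and hoist/death bookkeeping shown are assumptions based on the
description above):

#include <vector>

// Sketch: extend the death of any hoisted value used by a non-hoisted
// (in-loop) instruction, so its register is never recycled while the
// loop still needs it.  Values used only by other hoisted instructions
// keep their original deaths, so their registers stay reusable.
struct Instruction {
    bool hoist;          // computed once, before the loop?
    int  x, y, z;        // value ids of inputs, -1 if unused
    int  death;          // index of the last instruction reading this value
};

void extend_hoisted_lifetimes(std::vector<Instruction>& program) {
    const int end = (int)program.size();
    for (const Instruction& inst : program) {
        if (inst.hoist) { continue; }                 // only in-loop users matter
        for (int arg : {inst.x, inst.y, inst.z}) {
            if (arg >= 0 && program[arg].hoist) {
                program[arg].death = end;             // alive for the whole loop
            }
        }
    }
}
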
Change-Id: I25b37ab1f1fea3042d7fd167529abc8fed1dddff
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/233239
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Add vpblendvb, vpcmpeqd, and vpcmpgtd, to implement select and eq/lt/gt.
I want to think just a touch more about neq, lte, and gte.
This is enough to JIT everything SkVMBlitter creates today.
There are 24 possible argument orders to vpblendvb,
so I'm sure I've got them wrong somehow, even with the new test.
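
For reference, the per-lane semantics these instructions implement (a plain
scalar sketch, not the JIT lowering itself):

#include <cstdint>

// Per 32-bit lane: vpcmpeqd/vpcmpgtd produce all-ones for true and zero for
// false; vpblendvb then picks bytes from t or f by the mask's high bits,
// which for all-ones/all-zeros masks is just a bitwise select.
int32_t eq    (int32_t x, int32_t y)               { return x == y ? ~0 : 0; }
int32_t gt    (int32_t x, int32_t y)               { return x >  y ? ~0 : 0; }
int32_t select(int32_t cond, int32_t t, int32_t f) { return (cond & t) | (~cond & f); }
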
Change-Id: I357664b866d8258a2b5438d520f47542ad581c50
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232060
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Nothing particularly tricky. Very much like store8,
but with one fewer shuffle.
This lets some of the 565 blitters JIT!
Change-Id: I853905bda30a0cda89f3fcb5fef1dfe62725063b
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232059
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Very similar to load8.
The only interesting thing is how different vpinsrw is from vpinsrb (and
vpinsrd and vpinsrq)... different map of instructions entirely.
Change-Id: Ia413b83604dd2d277d59495c5f693f505c35be9f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232058
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Add vbroadcastss(Ymm, Xmm) and expand movzbl() to support an offset.
The sequence
movzbl (load a byte, zero-extend to 4 byte)
vmovd (move that 4 byte value to an xmm)
vbroadcastss (broadcast low 4 bytes of xmm to all lanes of ymm)
implements uniform8.
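
Put together, the three instructions amount to this (a scalar sketch of
uniform8's effect; the function name and signature are illustrative, not
SkVM API):

#include <cstdint>

// uniform8: load one byte at uniforms+offset, zero-extend it to 32 bits,
// and broadcast it into every lane of the vector.
void uniform8(const uint8_t* uniforms, int offset, uint32_t lanes[8]) {
    uint32_t v = uniforms[offset];      // movzbl off(%reg), %r32
    for (int i = 0; i < 8; i++) {       // vmovd, then vbroadcastss to all lanes
        lanes[i] = v;
    }
}
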
Change-Id: I1d3125920d19dcb3cad5980495310bc95b9dffee
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232057
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
vbroadcastss with a register argument and immediate offset implements
uniform32. And this turns on the JIT for some new SkVMBlitter paths
that pass more arguments than I'd previously wired up, so add a few
more.
Change-Id: I66db1286dcdb2c4a4ba7c43f2dc2cd13564d4d34
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232056
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Just because a program doesn't read from or
write to memory doesn't mean it's pointless.
Oh wait, yes it does. It shouldn't crash though.
Change-Id: I6a9c26c065831f9598afccce6e0a34a178cbd925
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/230839
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Fills in a little TODO.
Also add assembler unit tests for similar float->int and int->float.
Tested by SkVM_mad, SkVM_madder.
Change-Id: I5334029927fdecb0ff7f5a3b081cf2ce7b23995c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/230838
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Backfill tests for some uncovered operations.
Mostly functional tests, with some targeted at particular
corners of code generation, and a few assembler unit tests.
This doesn't get up to 100% line coverage, but I think it
covers all the non-failure cases.
Change-Id: Ib701df0c505aa41e3b2fe3cd429447acb294b752
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/230837
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Change-Id: I58b52d3e1d05d0834be30e00d991636e227cbf0b
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/230836
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Add missing comparison and selection ops, bit casts, 16-bit memory
operations, gathers, uniform loads, and fill in math holes where
reasonable. Update some names to be a bit more regular.
I think all instructions are implemented in the interpreter,
and many tested. More testing and JITs to follow.
Change-Id: I8cf377e8b72a86ac950e020892ce82b39e9d7277
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/229893
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This first tries to JIT while hoisting all constants,
and if that fails, tries again hoisting no constants.
I figure this is one of those 80/20 deals for how to
handle constant hoisting and register pressure. This
probably mostly moots doing anything fancy like using
memory operands with AVX or lane operands with NEON.
This _doesn't_ moot hoisting the NEON tbl arguments,
which is not yet done here, but probably my next CL.
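
The retry itself is easy to sketch (jit_with_hoisting() is a hypothetical
stand-in, not the real entry point):

struct Program;   // stand-in for skvm::Program

// Hypothetical: attempt to JIT, either hoisting every constant into a
// register or re-broadcasting constants inside the loop; returns success.
bool jit_with_hoisting(const Program&, bool hoist_constants);

bool try_jit(const Program& program) {
    return jit_with_hoisting(program, /*hoist_constants=*/true)     // the 80% case
        || jit_with_hoisting(program, /*hoist_constants=*/false);   // fall back under register pressure
}
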
Change-Id: Id09d5cdddcdb45207bdfc914a5a3128a481a26f3
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/229058
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Even if a JIT ultimately doesn't end up hoisting any values, it's going
to want this information while it decides. Writing it in one place also
ensures we only get it wrong in one place...
I'm not extending the lifetime of hoisted instructions here in Builder.
That's something to leave to the backend so they have the flexibility of
which of these values to hoist, if any. If they don't hoist, they'll
need to know when the value dies.
Moving this information back here lets the test expectation goldens
reflect the hoist bit again too. Kind of nice.
Change-Id: Ib165ca898a97c1d822cb28fe24f15bae4d570a17
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/229024
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
We can always move data around so that an FMA is possible using no more
registers than we would otherwise, and on x86, even using no more
instructions.
The basic idea here is that if we can't reuse one of the inputs to
destructively host the FMA instruction, the next best thing is to copy
one of the arguments into tmp() and accumulate the FMA there.
Once the FMA has happened, we just need to copy that result to dst().
We can of course skip that copy if dst() == tmp(). On x86 we never need
that copy; dst() and tmp() are picked using the same logic except that
dst may alias one of its inputs, and we only fall into this case after
we've already found it doesn't. So we can just assert dst() == tmp()
rather than check it like we do on ARM.
It's subtle, but I think sound.
I'm using logical-or to copy registers around. This is a little lazy,
but maybe not as lazy as it looks: on ARM that is _the_ way to copy
registers. There's a vmovdqa instruction I could use on x86, TBD.
All paths through this new code were being exercised on ARM, but we
didn't have anything hitting the tmp case on x86, so I've added a new
unit test that hits the corner cases of both implementations.
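
The shape of that logic, per mad (a sketch; mov() and fma() are placeholder
helpers, not the real Assembler API):

// Placeholders: copy a register, and destructively compute d = d*y + z.
void mov(int dst, int src);
void fma(int d, int y, int z);

// Sketch of the mad lowering described above: reuse an input register for
// the FMA when we can, otherwise accumulate in tmp and copy out at the end.
void emit_mad(int dst, int tmp, int x, int y, int z) {
    if (dst == x) {
        fma(dst, y, z);          // dst = x*y + z, destroying x
    } else if (dst == y) {
        fma(dst, x, z);          // same trick with the other input
    } else {
        mov(tmp, x);             // copy an input somewhere safe,
        fma(tmp, y, z);          // accumulate the FMA there,
        if (dst != tmp) {
            mov(dst, tmp);       // and copy out (never needed on x86).
        }
    }
}
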
Change-Id: I5422414fc50c64d491b4933b4b580b784596f291
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228630
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
We have pack(x,y,imm) = x | (y<<imm) assuming (x & (y<<imm)) == 0.
If we can destroy x, sli (shift-left-insert) lets us implement that
as x |= y << imm. This happens quite often, so you'll see sequences
of pack that used to look like this
shl v4.4s, v2.4s, #8
orr v1.16b, v4.16b, v1.16b
shl v2.4s, v0.4s, #8
orr v0.16b, v2.16b, v3.16b
shl v2.4s, v0.4s, #16
orr v0.16b, v2.16b, v1.16b
now look like this
sli v1.4s, v2.4s, #8
sli v3.4s, v0.4s, #8
sli v1.4s, v3.4s, #16
We can do this thanks to the new simultaneous register assignment
and instruction selection I added. We used to never hit this case.
Change-Id: I75fa3defc1afd38779b3993887ca302a0885c5b1
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228611
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Trying to keep most of the structural parts shared between x86_64 and
aarch64. Not sure if this will stay factored like this long-term, but
the last version felt like there was a bit too much redundancy, and I
don't want to write things like register management more often than I have
to.
Change-Id: Ieeb21f433715a730c41c85d657c5b33fa4702696
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228608
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
- get rid of variadic Assembler::byte()... not used very often
- rename Assembler::byte(ptr,n) to bytes()
- align with 0 bytes, get rid of nop()
Change-Id: I7564d3bad00e3f0d1c7a80153c445966914fccf0
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228601
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This is a reland of 558b639225
PS2... oh, right, not everything supports AVX2.
Original change's description:
> more JIT refactoring
>
> This re-enables AVX2 JIT with simultaneous register assignment and
> instruction selection. You can see it working in a very basic way in
> how we choose instructions and registers for Op::mad_f32.
>
> Constants are still broadcast, here inside the loop instead of hoisted.
> I think it'll probably end up best to use constants directly from memory
> (as in vpshufb's masks), falling back to these in-loop broadcasts when
> that can't work.
>
> Change-Id: If17d51b9960f08da3612e51ac04424e996bf83d4
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228366
> Commit-Queue: Mike Klein <mtklein@google.com>
> Reviewed-by: Mike Klein <mtklein@google.com>
Cq-Include-Trybots: skia.primary:Test-Mac10.13-Clang-VMware7.1-CPU-AVX-x86_64-Debug-All-NativeFonts
Change-Id: I6f99d275040abe6210a980fc544f7f22c3b85727
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228476
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This reverts commit 558b639225.
Reason for revert: broke some perf bots
Original change's description:
> more JIT refactoring
>
> This re-enables AVX2 JIT with simultaneous register assignment and
> instruction selection. You can see it working in a very basic way in
> how we choose instructions and registers for Op::mad_f32.
>
> Constants are still broadcast, here inside the loop instead of hoisted.
> I think it'll probably end up best to use constants directly from memory
> (as in vpshufb's masks), falling back to these in-loop broadcasts when
> that can't work.
>
> Change-Id: If17d51b9960f08da3612e51ac04424e996bf83d4
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228366
> Commit-Queue: Mike Klein <mtklein@google.com>
> Reviewed-by: Mike Klein <mtklein@google.com>
TBR=mtklein@google.com
Change-Id: Id6cd5acd873499bb394009489d77e7636ecbc9c6
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228462
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This re-enables AVX2 JIT with simultaneous register assignment and
instruction selection. You can see it working in a very basic way in
how we choose instructions and registers for Op::mad_f32.
Constants are still broadcast, here inside the loop instead of hoisted.
I think it'll probably end up best to use constants directly from memory
(as in vpshufb's masks), falling back to these in-loop broadcasts when
that can't work.
Change-Id: If17d51b9960f08da3612e51ac04424e996bf83d4
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228366
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
I find myself passing around parallel vectors of Builder::Instructions
and deaths so often that it just makes more sense practically to store
them together. It's a little awkward that the values are only useful
after calling done(), but I can live with that.
Get a little more careful about mutation, passing Builder::Instructions
by const&. Instead of extending lifetimes of live hoisted
instructions, just check for them in maybe_recycle_register().
Change-Id: I1cb9e25c1a7c46a250c2271334821be8535353bf
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228367
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
I'm slowly refactoring my way to where hoisting and register assignment
are done in backend-specific ways, but this liveness analysis is always
going to be useful for each backend.
Use deaths() to restore friendly ☠️ dead code markers in test dumps.
Change-Id: I3ab94665bbbbf0788b0b27e00d644eba927dff47
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228113
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
Add Program::dropJIT() to allow us to proactively drop
any JIT code forcing fallback on the interpreter,
and use it to test both on JIT-supported platforms.
Other platforms will just test the interpreter twice.
Change-Id: I607d00ef3c648e66a0b3a1374b11aa82dbfff70c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/227424
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Arg strides are the reason JIT happens lazily in Program::eval() today
instead of proactively in Builder::done() or Program's constructor. It
also just really doesn't make sense to delay this information... it's
not like you can change it up sanely between calls to eval().
The argument index now comes implicitly from the order of calling arg().
This may seem logically independent, but it prevents a weird situation
where you could use the same argument index twice with different
strides... not sure what that would mean.
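
A hedged sketch of the intended usage (exact SkVM signatures may differ
slightly from what's shown here):

#include "src/core/SkVM.h"

skvm::Program make_copy_program() {
    skvm::Builder b;
    // Argument indices come implicitly from the order of these arg() calls,
    // and each argument carries its stride from the start, so the Program
    // can JIT up front rather than lazily in eval().
    skvm::Arg src = b.arg(/*stride=*/4),    // argument 0
              dst = b.arg(/*stride=*/4);    // argument 1
    b.store32(dst, b.load32(src));
    return b.done();
    // eval() then takes pointers in the same order the args were declared,
    // e.g. program.eval(n, src_ptr, dst_ptr).
}
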
Change-Id: I0f5d46e94a1ca112a72675c5492f17c0dd825ce0
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/227390
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Basically the same deal as aarch64:
- a bunch of instructions to rewrite control
flow to be two loops, body and tail
- a bunch of instructions to support scalar
loads and stores in the tail
We can now remove the JIT::mask field.
I've removed the SkUNREACHABLE I'd put in for the ARM code... as
written the interpreter is still reachable by the loser if two threads
race to JIT the program. Medium term I plan to move JIT compilation to
a more proactive time, eliminating the need for the lock and letting the
interpreter become truly unreachable.
I had a little bit of a false start with what instructions to use for
scalar load8 and store8, first starting with instructions that loaded
via GP registers, then remembering vpinsrb and vpextrb can take a memory
argument, loading into xmm directly. I've left the first instructions I
used in the file, still implemented but only used from the unit tests.
They're pretty common and will probably be useful some day.
Change-Id: I471b13026af4b1c6e861a53159f9df5f0285447c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/227178
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
I had been setting the REX R bit to select high registers,
but you actually set the B bit. Don't know how I got that
wrong before... the leading byte should be 49 not 4c.
$ cat test.s
foo:
addq $7, %r8
$ clang -c test.s && objdump -d test.o
0000000000000000 <foo>:
0: 49 83 c0 07 add $0x7,%r8
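
For context, the REX prefix layout (standard x86-64 encoding, nothing
SkVM-specific):

#include <cstdint>

// REX prefix: 0100 W R X B.
//   W: 64-bit operand size
//   R: extends the ModRM reg field
//   X: extends the SIB index field
//   B: extends the ModRM r/m (or opcode register) field -- the bit that
//      actually selects r8-r15 for `addq $7, %r8`.
uint8_t rex(bool W, bool R, bool X, bool B) {
    return (uint8_t)(0x40 | (W << 3) | (R << 2) | (X << 1) | (B << 0));
}
// rex(1,0,0,1) == 0x49, matching the objdump above;
// rex(1,1,0,0) == 0x4c, the wrong leading byte emitted before this fix.
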
Change-Id: I039e1c4f4ea20523a1e2cc9bcf5f6d9321a6223b
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/227177
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
This turned out to be quite an easy transformation
with yesterday's work already done. No codegen changes.
Change-Id: Ife19ab7731514c54cfed963a6d2e9b1ec2246997
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/227137
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Support forward references in Label.
In addition to tracking the current Label offset (used for
backward references essentially just the same as before this CL)
we also store a list of instructions that refer to each Label.
When a Label moves, each instruction gets a new displacement.
To make this a little easier, remove the 8-bit jump form on x86...
this way all x86 displacements are 32-bit and all ARM 19-bit.
For now only cbz() supports this, just to start somewhere.
More to do but it's worth an early design review.
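
A sketch of that bookkeeping (field and function names here are assumptions,
not the actual Assembler::Label layout):

#include <vector>

struct Label {
    int              offset = 0;    // where the label currently points
    std::vector<int> references;    // code offsets of instructions that jump here
};

// When the label finally lands (or moves), rewrite each recorded reference
// with its new displacement.  In the real assembler the displacement is
// measured from the end of the referring instruction, not from `at`.
void patch_references(const Label& l, void (*patch_disp32)(int at, int disp)) {
    for (int at : l.references) {
        patch_disp32(at, l.offset - at);
    }
}
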
Change-Id: I23d2bcd7742965ab694ae4828f53409cb9fc807f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/226937
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This adds a bunch of instructions we'll need to handle the N < 4 tail
within the JIT code on ARM.
- ldrb/strb are 1-byte loads and stores
- sub subtracts without setting flags
- cmp just sets flags (actually just subs with an xzr destination)
- add b and b.lt, just like b.ne
- cbz and cbnz... we only need cbz but I accidentally did cbnz first
Once I add support for forward jumps, we'll be able to use these
instructions to restructure the loop to
entry:
hoisted setup
loop:
if N < 4, jump tail (cmp N,#4; b.lt tail)
... handle 4 values ...
jump loop (b loop)
tail:
if N == 0, jump end (cbz N, end)
... handle 1 value ...
jump tail (b tail)
end:
ret
Change-Id: I62d2d190f670f758197a25d99dfde13362189993
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/226828
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Noticed we were only dumping the final register
programs for the integer code. Might as well also
track the value programs.
Change-Id: I417c5c655b632691557bbbb136dcbd3f3167af9a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/225324
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This is test-only code only used by SkVMTest.cpp,
so it can live there. This cuts the dependency
of SkVM on SkStream and co.
Change-Id: I7695e527b2d16e4485f8c5f4cd39bb8300e9221d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/225321
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Some small refactoring to common up redundant opcode building.
Oddly, I think I've got better codegen than what Clang would do here.
Clang doesn't generate uxtl-based code to unpack 8-bit to 32-bit,
instead preferring to load each byte one at a time and insert them one
at a time.
Me:
ldr s0, [x0]
uxtl v0.8h, v0.8b
uxtl v0.4s, v0.8h
Clang:
ldrb w8, [x0]
ldrb w9, [x0, #1]
ldrb w10, [x0, #2]
ldrb w11, [x0, #3]
fmov s0, w8
mov v0.s[1], w9
mov v0.s[2], w10
mov v0.s[3], w11
Change-Id: I0fdf5c6cdcde6a4eb9290936284fd3ffcb2159f6
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/224821
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
So far this is just as easy as I had hoped.
Change-Id: I5f69a900b32d9bf70156b55e334233d7376b820f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/223340
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Instead of allocating into a std::vector, we do one quick first pass to
measure how much memory we need to allocate, mmap enough pages for that,
then another real writing pass.
This cuts a microsecond or so off the profile. There's another
microsecond left to cut if we could eliminate that first measuring pass,
but I'm no longer sure it's easy to come up with a good upper limit on
the program size now that I'm thinking about the data part of the
program as well.
vpshufb is our current max instruction at 9 bytes of code, but that also
implies another 32 bytes of control data. I'm not sure I feel very
clever allocating 41 * |instructions| bytes to be conservatively safe...
it seems like ridiculous overkill.
Ultimately I found it easier to just measure twice, cut once.
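
In outline, the two-pass scheme looks like this (a sketch under assumptions;
the real code drives the Assembler twice rather than a callback):

#include <sys/mman.h>
#include <cstddef>

// Measure twice, cut once: run the same emit routine once just counting
// bytes, mmap exactly that much executable memory, then run it again
// actually writing the instructions and data.
void* jit(void (*emit)(void* buf, size_t* len)) {
    size_t len = 0;
    emit(nullptr, &len);                               // pass 1: measure only
    void* buf = mmap(nullptr, len, PROT_READ|PROT_WRITE|PROT_EXEC,
                     MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { return nullptr; }
    size_t written = 0;
    emit(buf, &written);                               // pass 2: really write
    return buf;
}
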
Change-Id: I16ccdafbc789711837b41b3d5a557808798eb1b4
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/223305
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
I was just reading the ARM docs and realized that their BIC ("BIt
Clear") is the same as SSE's ANDN ("AND Not") instruction. It's kind of
a neat little tool to have laying around... comes up more than you'd
think, and it's sometimes the clearest way to express what you're doing,
as in the changed program here where the comment is "mask away the low
bits". That's a bit_clear with a mask for what you want to clear away!
And the real reason to write this up is that I want to have a CL to
point to that shows how to add an instruction top to bottom.
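
The op itself is tiny; its scalar semantics are just:

#include <cstdint>

// Clear the bits of x that are set in mask.  ARM's BIC and SSE's ANDN both
// do exactly this in one instruction (operand order differs between them).
int32_t bit_clear(int32_t x, int32_t mask) { return x & ~mask; }
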
Change-Id: I99690ed9c1009427b3986955e7ae6264de4d215c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/223120
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Reviewed-by: Mike Reed <reed@google.com>
For now, disable the vpmovusdb AVX-512 instruction, using the compound
AVX2 fallback instead. I need to learn how to encode EVEX prefixes
before we can use that, and it's not very important.
That's everything! We're fully in control now, and should be able to
run this on any x86-64 Linux or Mac. And we can relax some of the
defined(SKVM_JIT) guards so that, e.g., we can unit test Assembler even
on all platforms.
Stifle some warnings about ~bool by ~(int)bool.
Would like to enable when is_mac too but can't seem to get past
(bogus?) thread annotation on the bots. My local Mac is fine. :/
Change-Id: If00bdd97ebd9684ed109933e2fa70c5e6f6ea339
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/222631
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
This shows off a little how easy backwards-only labels are.
The rip == rbp + Mod::Indirect convention isn't something
you'd be able to guess without just looking at the docs.
I'm not actually sure if you can only use rbp or also r13,
but LLVM seems to always do the equivalent of rbp... might
just be that high bit in VEX is ignored: they're registers
5 and 13, 8 apart, only distinguished by that bit.
Conveniently, RIP addressing is always 32-bit, so there's
no benefit to spending time checking whether the offset
fits in a byte, though most of our offsets would.
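
The convention in question, spelled out (standard x86-64 ModRM encoding,
nothing SkVM-specific):

#include <cstdint>

// In Mod::Indirect (mod=0b00), r/m=0b101 -- the slot that would otherwise
// mean (%rbp) -- instead means RIP-relative, always followed by a 32-bit
// displacement.  That's why the assembler passes rbp (register 5) as the
// r/m operand for rip-relative loads.
uint8_t mod_rm(uint8_t mod, uint8_t reg, uint8_t rm) {
    return (uint8_t)((mod << 6) | ((reg & 7) << 3) | (rm & 7));
}
// e.g. mod_rm(0b00, dst, 0b101 /* rbp => rip */), then emit the disp32.
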
Change-Id: I01b7fb1500667e1bf98490d5144459f92e1b375d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/222857
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
By putting data first in descending alignment then code, we never need
any alignment padding.
This also makes all jumps and ip-relative data loads backward, so
they're really easy to assemble. No need for any sort of deferred
where-does-this-label-mean logic; the label can just be a simple byte
offset established before you need to use it.
Nothing new switched off of Xbyak in this CL, but the rearrangement
makes the rest a lot easier.
The one downside I've found so far is that the disassembly of the
first instruction can get confused into data or other instructions,
e.g.
63: 01 ff add %edi,%edi
65: 00 ff add %bh,%bh
67: 00 00 add %al,(%rax)
69: ff 00 incl (%rax)
6b: ff c4 inc %esp
6d: e2 7d loop ec <skvm-jit-884702985+0xac>
6f: 18 05 eb ff ff ff sbb %al,-0x15(%rip) # 60 <skvm-jit-884702985+0x20>
75: c4 e2 7d 18 0d e6 ff ff ff vbroadcastss -0x1a(%rip),%ymm1 # 64 <skvm-jit-884702985+0x24>
7e: c4 e2 7d 18 15 e1 ff ff ff vbroadcastss -0x1f(%rip),%ymm2 # 68 <skvm-jit-884702985+0x28>
There are 3 vbroadcastss instructions here, each starting with c4 e2 7d
18, but the first has been disassembled as if its c4 were part of the
last data entry (0xff00ff00) as inc %esp.
Probably not a big deal for now, particularly since those vbroadcastss
are all outside the loop and never show up on a profile. If it gets too
confusing I think we can dump the programs starting from the beginning
of the code instead of from the data; we won't be able to inspect the
data, but everything should disassemble perfectly.
Change-Id: I0cc864359fd0740fc026070eaf2b6cb130783a57
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/222574
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
The encoding all goes through more or less the same paths
as the three-argument instructions, but like the nursery
rhyme, when there are only two they all roll over
and the op-extension hops into the bed.
vpermq is the first place we need to set the W bit
to indicate a 64-bit lane operation, so a little
minimal plumbing for that. It takes its arguments
a little differently too, passing dst where you'd
expect, the source where we'd pass y, and requiring
us to pass literal 0000 for the vvvv bits in VEX
(inverted as normal to literal 1111).
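
For reference, the three-byte VEX prefix fields involved (standard encoding;
the helper below is a sketch, not Assembler code):

#include <cstdint>

// Three-byte VEX prefix:
//   byte 0: 0xC4
//   byte 1: ~R ~X ~B | map_select     (note the inverted bits)
//   byte 2: W | ~vvvv | L | pp
// vpermq needs W=1 (64-bit lanes), and as a two-operand instruction passes
// vvvv as literal 0000, stored inverted as 1111.
void vex3(uint8_t out[3], bool R, bool X, bool B, uint8_t map,
          bool W, uint8_t vvvv, bool L, uint8_t pp) {
    out[0] = 0xC4;
    out[1] = (uint8_t)((!R << 7) | (!X << 6) | (!B << 5) | (map & 0x1f));
    out[2] = (uint8_t)(( W << 7) | ((~vvvv & 0xf) << 3) | (L << 2) | (pp & 3));
}
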
Change-Id: I91a4cd1b316eb908992631ce8b2cb3c62078e8c6
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/222565
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
- 32x8 i32 add,sub,mul
- add I32_Naive bench/test builder to get better i32 mul coverage
- minor refactoring all over
Change-Id: I13cc19ff37a2da0bcff289ba51baac08f456d6c5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/222485
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>