Commit Graph

190 Commits

Mike Klein
86975cd168 Revert "gather8/16 JIT support"
This reverts commit 54659e51bc.

Reason for revert: ASAN

Original change's description:
> gather8/16 JIT support
> 
> The basic strategy is one at a time, inserting 8- or 16-bit values
> into an Xmm register, then expanding to 32-bit in a Ymm at the end
> using vpmovzx{b,w}d instructions.
> 
> Somewhat annoyingly we can only pull indices from an Xmm register,
> so we grab the first four then shift down the top before the rest.
> 
> Added a unit test to get coverage where the indices are reused and
> not consumed directly by the gather instruction.  It's an important
> case, needing to find another register for accum that can't just be
> dst(), but there's no natural coverage of that anywhere.
> 
> Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

TBR=mtklein@google.com,herb@google.com

Change-Id: I912273e6ffc9258537ba806951a5964be0218d58
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284471
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-20 15:27:49 +00:00
Mike Klein
54659e51bc gather8/16 JIT support
The basic strategy is one at a time, inserting 8- or 16-bit values
into an Xmm register, then expanding to 32-bit in a Ymm at the end
using vpmovzx{b,w}d instructions.

Somewhat annoyingly we can only pull indices from an Xmm register,
so we grab the first four then shift down the top before the rest.

Added a unit test to get coverage where the indices are reused and
not consumed directly by the gather instruction.  It's an important
case, needing to find another register for accum that can't just be
dst(), but there's no natural coverage of that anywhere.

Change-Id: I8189ead2364060f10537a2f9364d63338a7e596f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284311
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-20 15:19:37 +00:00
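
A scalar model of what the gather8 case above computes, for reference: pull one byte per lane by its index, then zero-extend each lane to 32 bits at the end. This sketches the semantics only, not the Xmm/Ymm instruction sequence the JIT emits.

    #include <cstdint>

    void gather8(const uint8_t* base, const int32_t ix[8], uint32_t dst[8]) {
        uint8_t bytes[8];
        for (int i = 0; i < 8; i++) {
            bytes[i] = base[ix[i]];   // one value at a time, like inserting into an Xmm
        }
        for (int i = 0; i < 8; i++) {
            dst[i] = bytes[i];        // widen 8 -> 32 bit at the end, the vpmovzxbd step
        }
    }
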
Mike Klein
b8e041e5f5 refactor arm instructions
Funnel the ARM instructions through two op() helpers, one for 3-arg
vector instructions, another for all others (0,1,2 arg, optional imm).

More consistent use of (immN & N_mask) to make things clearer.

Add missing imm12 offset to load and store instructions, with tests.
Notice they're in element counts, so we can go up to 4096 16-byte stack
entries, not 256 entries like you might think.

Change-Id: I99a3ad30b7b0926f93da671f00d89759934e65b4
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284255
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-17 17:12:13 +00:00
Mike Klein
48e782486f set up stack frames on aarch64
Going to be easier to work on stack/register things if we
don't have to keep thinking of aarch64 as a special case.

This just sets up the frames, will follow up with JITMode::Stack.

Change-Id: Ic0df4c5deb9c7d55eb73a62e4b6b1c9919996974
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284243
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-17 16:44:53 +00:00
Mike Klein
c15c936c3e GP64 Operand conversion
Move all the non-vector instructions together,
and convert them to use Operand where possible.

In general that can be any of
   - (Operand, imm)
   - (Operand, GP64)
   - (GP64, Operand)
and that means there are two ways to encode (GP64,GP64)
instructions, so there's a disambiguator added.

Our measure of success is eliminating calls to rex()
except from our one helper, and so far, so good.

I haven't seen a need for Label Operands yet, and they're
only useful as (GP64, Operand) style arguments (can't
really be destinations in read-only memory) but we could
add support pretty easily if we find the need.

Tweak one test to avoid int/pointer ambiguity about 0.

Changed some of the instructions to always use a REX
prefix just to make it easier to funnel everything
through one place.  movzbl -> movzbq, etc.

Change-Id: I606f94e76e0ef8f491409f23748f5c8dcb607491
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/284023
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-16 19:39:11 +00:00
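
A small standalone illustration of the overload shapes and the (GP64,GP64) ambiguity described above; the type names and the puts() bodies are mine, not the actual Skia assembler API.

    #include <cstdio>

    struct GP64 { int reg; };
    struct Operand {
        enum Kind { REG, MEM, LABEL } kind;
        Operand(GP64) : kind(REG) {}   // implicit: a register is an Operand
    };

    void add(Operand, int)  { std::puts("(Operand, imm)");  }
    void add(Operand, GP64) { std::puts("(Operand, GP64)"); }
    void add(GP64, Operand) { std::puts("(GP64, Operand)"); }

    int main() {
        GP64 rax{0}, rcx{1};
        add(rax, 42);            // picks (Operand, imm)
        add(rax, Operand{rcx});  // forced to (GP64, Operand): a bare add(rax, rcx)
                                 // would match two overloads, hence the disambiguator
    }
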
Mike Klein
8390f2ead6 lots more refactoring
Rename YmmOperand to Operand, focusing on that side of things for
now.  And delete unused GP64Operand... might not need to return.

Big refactor around W and L bits and the helper op() functions.
Lots more is now funneled through a single core op() function.

Support Xmm and GP64 (direct moves) as Operands too.

As a rule of thumb I measured my progress by counting vex() calls.
Ideally we call it only in that centralized op().

I think I got as close as we can get, with only vgatherdps calling vex()
itself.  Given its weird encoding, there's no good way to work
vgatherdps into the abstraction.  It's close to Mem{base,0,index,scale},
but the index is a Ymm register, and there aren't any corresponding
special cases for it like there are normally for rsp in SIB.

Change-Id: I48e4583293e1df386a18d37ad54197016ce13251
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283806
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-04-16 16:55:32 +00:00
Mike Klein
edc2dacb3a convert load_store / stack_load_store to new style
This replaces most vmovups variants with two: load to register from
flexible operand, or store from register to flexible operand.

And upgrade the zero-extending loads too to finish off load_store().

More to come in small steps.

Change-Id: I80645f264ee91662260046c8e0a45ba6d1bf98c6
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283753
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-15 22:54:46 +00:00
Mike Klein
9bb88673f5 start on operand refactoring
This introduces Mem, a way of expressing x86 addressing:

   addr = base reg + offset imm + (scale imm * index reg)

using the usual x86 convention of index = rsp to indicate no index.

And then, this introduces GP64Operand and YmmOperand, which are
generalizations like YmmOrLabel that fold over all the types of
arguments available at that position.  (YmmOperand replaces YmmOrLabel).

There's still much to do, but I've started by generalizing most
of the Ymm instructions to take YmmOperand, and added some new
unit tests for vmovdqa to make sure all the various modes work.

Change-Id: Ie6cc1186310ff39c52a2a061431a91d10816c98a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283344
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-15 20:41:09 +00:00
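
A minimal sketch of the addressing form Mem expresses, per the formula above; the field names and the register-file array are illustrative, not Skia's actual layout.

    #include <cstdint>

    struct Mem {
        int base;        // GP64 base register
        int disp  = 0;   // immediate byte offset
        int index = 4;   // GP64 index register; rsp (4) conventionally means "no index"
        int scale = 1;   // 1, 2, 4, or 8
    };

    // The address such an operand names, given the general-purpose register values.
    uintptr_t effective_address(const Mem& m, const uintptr_t regs[16]) {
        uintptr_t addr = regs[m.base] + m.disp;
        if (m.index != 4) {                    // rsp as index means "no index"
            addr += m.scale * regs[m.index];
        }
        return addr;
    }
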
Mike Reed
1b84ef2b50 [skvm] approx_atan2
I propose landing this, but then pause on extending math functions until
we develop a clearer migration story for sksl.

Change-Id: Id42ec37071da058e6e7809abe1ed0570d48df8e7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283229
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
2020-04-13 22:31:40 +00:00
Mike Klein
45d9cc86b3 remove i16x2 ops
These are neat but mostly just a distraction for now.
I've left all the assembly in place and unit tested
to make putting these back easy when we want to.

Change-Id: Id2bd05eca363baf9c4e31125ee79e722ded54cb7
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283307
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-04-13 19:08:11 +00:00
Mike Klein
cb5110443f impl uniform16
One new instruction movzwl needed.

Change-Id: Ic70ba34d667eb6d570aeca88c4243e0c3309525f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283305
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-13 19:04:11 +00:00
Mike Klein
10fc1e66a9 skvm unit test cleanup
  - let skvm tell us if FMAs are supported
  - unguard previously LLVM-only tests
  - simplify testing JIT and interpreter

We're getting close enough to always being able to JIT that carefully
marking what JITs and what doesn't is more annoying than helpful.
Now just test the JIT if present, and always test the interpreter.

Change-Id: I83762b38e0773ccaee795ae0fc9907e86628d73e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283275
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-04-13 18:48:01 +00:00
Mike Reed
d468a1619a [skvm] approx_[asin,acos,atan]
Starting on atan2, but there is a lot of quadrant clean-up, so will
do that in a separate CL.

Change-Id: Ie1e70051a6ecb19a2e521b56ed09796e8e745276
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/283016
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Reed <reed@google.com>
2020-04-13 12:17:02 +00:00
Mike Reed
801ba0d606 approx_tan for skvm
Change-Id: I22c120db2535929bd20df3068cca1aecc57ae746
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282744
Reviewed-by: Brian Osman <brianosman@google.com>
Commit-Queue: Mike Reed <reed@google.com>
2020-04-10 17:09:17 +00:00
Mike Klein
210288fdcd add SkVM_min_max unit test
Tests min() / max() float behavior fairly exhaustively.

We sometimes specialize into min_f32_imm and max_f32_imm, so it's
important to test with constant values as each argument to cover that
specialization, and to test with both as non-constant values to cover
when that specialization does not apply.

Change-Id: Ib021fd5a6d322058af2f504048b9ed02d0510732
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282315
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-04-08 19:05:36 +00:00
Mike Klein
5e9f0ee13f add and test stack load/store
Change-Id: Ie0d29e31bd8c156ecd46cd658b5a4c53d8d2e11d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282115
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-07 21:15:47 +00:00
Mike Reed
82ff25e02c approximate sine for skvm
Change-Id: I90f12cb305ff8daf64b07e5f47bb3a158df95bee
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/282120
Commit-Queue: Mike Reed <reed@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-04-07 18:32:17 +00:00
Mike Klein
4067a9429a the return of bit_clear
bit_clear is at least useful as a special case for select(),
which helps with code readability.

Add is_NaN() and use these all together in sweep gradient.

Change-Id: I57a54f8956f85e0db0662b33f8446b8dc7342d8d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281685
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-05 16:44:16 +00:00
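
One way the pieces above fit together, written over plain 32-bit lanes (conditions are all-1s for true, all-0s for false); a readability sketch, not necessarily how the backends actually lower select().

    #include <cstdint>

    uint32_t bit_and  (uint32_t x, uint32_t y) { return x &  y; }
    uint32_t bit_or   (uint32_t x, uint32_t y) { return x |  y; }
    uint32_t bit_clear(uint32_t x, uint32_t y) { return x & ~y; }

    // Classic bit-select: keep t's bits where cond is set, f's bits where it isn't.
    uint32_t select(uint32_t cond, uint32_t t, uint32_t f) {
        return bit_or(bit_and(cond, t), bit_clear(f, cond));
    }

    bool is_NaN(float x) { return x != x; }   // NaN is the only value not equal to itself
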
Mike Klein
aa68109a59 special-case overhaul
  - new this-> convention: never use it when calling common public
    Builder methods like splat(), bit_and(), etc. like you'd see in
    normal user code, but always use it when calling private methods
    like this->push(), this->isImm(), this->allImm().

  - use c++17 if-statements to scope this->allImm() variables tighter.

  - check for x.id == y.id cases where applicable, including a tweak
    to min() and max() to make them able to hit the special case.

  - add special cases for I32 +,-,*, and remove an old unimportant
    unit test that assumed we didn't fold these.

  - add special cases for select(), and use select() in a few more
    places where it's clearer and now just as efficient.

Change-Id: Idaac9250ac5a95a48d33eeba1cc4380c8c91629d
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281678
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-05 15:41:36 +00:00
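
A standalone toy (types are mine, not skvm's) showing the shape of the special cases described above: fold when all arguments are immediates, scope the folded values with a C++17 if-statement, and short-circuit when both arguments share an id.

    #include <algorithm>

    struct Val { int id; bool is_imm; float imm; };

    struct Toy {
        int next_id = 0;
        bool allImm(Val x, float* X, Val y, float* Y) const {
            if (x.is_imm && y.is_imm) { *X = x.imm; *Y = y.imm; return true; }
            return false;
        }
        Val splat(float f)         { return {next_id++, true,  f   }; }
        Val push_min_f32(Val, Val) { return {next_id++, false, 0.0f}; }  // stands in for push(Op::min_f32, ...)

        Val min(Val x, Val y) {
            if (float X, Y; this->allImm(x, &X, y, &Y)) { return splat(std::min(X, Y)); }  // constant fold
            if (x.id == y.id) { return x; }                                                // min(x,x) == x
            return push_min_f32(x, y);                                                     // general case
        }
    };
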
Mike Klein
cca2acfb77 remove little-used bit_clear() and bytes()
bit_clear() is just another bit_and(),
and bytes() is a way of expressing pshufb
that we never really use (yet).

Can always add them back later, but there's
some extra complexity to think about for each
that I'd like to not think about now:

  - common sub-expression elimination between bit_and and bit_clear
  - large-constant management when JIT'ing bytes()

Change-Id: I3a54afa963231fec1d5de949acc647e3430ed0d8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281557
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-03 18:16:54 +00:00
Mike Klein
5b701e127c pass around programs by value
It's clearer to see the flow of data this way and to read each pass'
implementation without all the pointer indirection, and move semantics
should let this be just as efficient.

Change-Id: I1ac211fbe54bec37de6d126eec0c211573c2a568
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281218
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-02 16:44:06 +00:00
Mike Klein
b7d87903df pull out schedule, finalize
This finishes up the main refactoring.
Still some follow-ups I want to try.

I got tired of typing usage.users(id) so I converted that to operator[],
which I think is clear for now.  If we add more methods that don't refer
to the users, we can undo?

Change-Id: I0ac563cfb1899f7a3f8b2cb6d50ca1646dd05071
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281216
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-02 16:37:29 +00:00
Mike Klein
7542ab5e5b reframe liveness_analysis as eliminate_dead_code
Seems nicer to keep encapsulated in a program->program pass
so nothing upstream of it has to think about liveness.

I will be circling back around to profiling the cost of these
temporaries, copies, etc.  I just want to start writing them as
if cost were no object first.

Change-Id: I1d1187b521fbe963e720e0d8de90316a549f7797
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281182
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-04-02 14:44:36 +00:00
Mike Klein
4f90a8e399 Uses refactoring
Instead of copying fIndex and marching it forward,
we can tick down our existing uses counts backward,
saving one temporary std::vector.

Our implementation does guarantee the Instructions
returned by users() are sorted, so let's lean into
that... that means we can find the death time of any
instruction simply by looking at users().back()
(if there are any, of course).

Everything else is names and formatting, the biggest
being renaming Uses -> Usage.  There's enough mention
of "users" and "uses" contrasting with each other that
I think it makes sense for the type to have the nice
middle-ground neutral name Usage, reflecting the arrow
and not which way we're thinking about it pointing.

Change-Id: I32ea9af6eb6430a162bee6da4810a599e8ed0dfd
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/281003
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-04-01 18:18:13 +00:00
Herb Derby
f20400e2a1 Introduce Liveness and Uses into existing scheduler
Liveness tracks all the live instructions in the instruction stream.
Uses maps each value to the instructions that use it.

Uses is overkill for the current schedule, but will be needed for
spilling.

Change-Id: Id20b7b7a90901e156d323bb612c5908f91405967
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/277744
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-03-31 17:53:07 +00:00
Mike Reed
bcb46c06c7 Add approx_pow/log2/pow2 to SkVM builder
... in support of programs for colorspacexforms

Change-Id: I72ace09f10511ef8994038a4af3feab8bc1a299e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/278466
Commit-Queue: Mike Reed <reed@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-03-23 22:17:04 +00:00
Mike Reed
f5ff4c25a0 add loadF() and storeF() helpers to Builder
Change-Id: I5fe1ca090868cfb9fa930753232e9cbb42737d9a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/278473
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
2020-03-23 19:25:43 +00:00
Mike Klein
5caf7dee25 restore Op::round
While I think trunc(mad(x, scale, 0.5)) is fine for doing our float
to fixed point conversions, round(mul(x, scale)) was kind of better
all around:

   - better rounding than +0.5 and trunc
   - faster when mad() is not an fma
   - often now no need to use the constant 0.5f or have it in a register
   - allows the mul() in to_unorm to use mul_f32_imm

Those last two points are key... this actually frees up 2 registers in
the x86 JIT when using to_unorm().

So I think maybe we can resurrect round and still guarantee our desired
intra-machine stability by committing to using instructions that follow
the current rounding mode, which is what [v]cvtps2dq inextricably uses.

Left some notes on the ARM impl... we're rounding to nearest even there,
which is probably the current mode anyway, but to be more correct we
need a slightly longer impl that rounds float->float then "truncates".
Unsure whether it matters in practice.  Same deal in the unit test that
I added back, now testing negative and 0.5 cases too. The expectations
assume the current mode is nearest even.

I had the idea to resurrect this when I was looking at adding _imm Ops
for fma_f32.  I noticed that the y and z arguments to an fma_f32 were by
far most likely to be constants, and when they are, they're by far likely
to both be constants, e.g. 255.0f & 0.5f from to_unorm(8,...).

llvm disassembly for SkVM_round unit test looks good:

~ $ llc -mcpu=haswell /tmp/skvm-jit-1231521224.bc -o -
	.section	__TEXT,__text,regular,pure_instructions
	.macosx_version_min 10, 15
	.globl	"_skvm-jit-1231521224"  ## -- Begin function skvm-jit-1231521224
	.p2align	4, 0x90
"_skvm-jit-1231521224":                 ## @skvm-jit-1231521224
	.cfi_startproc
	cmpl	$8, %edi
	jl	LBB0_3
	.p2align	4, 0x90
LBB0_2:                                 ## %loopK
                                        ## =>This Inner Loop Header: Depth=1
	vcvtps2dq	(%rsi), %ymm0
	vmovupd	%ymm0, (%rdx)
	addl	$-8, %edi
	addq	$32, %rsi
	addq	$32, %rdx
	cmpl	$8, %edi
	jge	LBB0_2
LBB0_3:                                 ## %hoist1
	xorl	%eax, %eax
	testl	%edi, %edi
	jle	LBB0_6
	.p2align	4, 0x90
LBB0_5:                                 ## %loop1
                                        ## =>This Inner Loop Header: Depth=1
	vcvtss2si	(%rsi,%rax), %ecx
	movl	%ecx, (%rdx,%rax)
	decl	%edi
	addq	$4, %rax
	testl	%edi, %edi
	jg	LBB0_5
LBB0_6:                                 ## %leave
	vzeroupper
	retq
	.cfi_endproc
                                        ## -- End function

Change-Id: Ib59eb3fd8a6805397850d93226c6c6d37cc3ab84
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/276738
Auto-Submit: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-03-12 21:10:34 +00:00
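
A small standalone comparison of the two to_unorm(8, x) formulations discussed above. std::nearbyintf follows the current rounding mode, which is what [v]cvtps2dq uses (nearest-even by default); the bigger win claimed above is register and constant use, with any value differences confined to products that land on exact halves.

    #include <cmath>
    #include <cstdio>

    static int to_unorm_trunc(float x) { return (int)(x*255.0f + 0.5f); }         // trunc(mad(x, 255, 0.5))
    static int to_unorm_round(float x) { return (int)std::nearbyintf(x*255.0f); } // round(mul(x, 255))

    int main() {
        for (int i = 0; i <= 1000; i++) {
            float x = i / 1000.0f;
            if (to_unorm_trunc(x) != to_unorm_round(x)) {
                std::printf("differ at x=%g: %d vs %d\n", x, to_unorm_trunc(x), to_unorm_round(x));
            }
        }
        return 0;
    }
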
Mike Klein
7c0332cd35 re-enable fnma
 - hook up fmls.4s as fnma_f32
 - add fneg.4s
 - use fneg.4s + fmls.4s to impl fms_f32
 - more tests to exercise these

Change-Id: I60173a5e4618ab968a9361e15334a1d63c001372
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/275412
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-03-05 21:58:07 +00:00
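
The identities behind that mapping, assuming skvm follows the x86-style naming where fma(x,y,z) = x*y + z (my assumption): fmls.4s computes d -= n*m, so seeding the accumulator with z gives fnma directly, and fms is its negation.

    float fnma(float x, float y, float z) { return z - x*y; }   // one fmls.4s, accumulator seeded with z
    float fms (float x, float y, float z) { return x*y - z; }   // negate that result: fmls.4s + fneg.4s
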
Mike Klein
238105b50c skip dump checks on machines w/o FMAs
They'll never see fma_f32 ops.

Change-Id: I39371606c673fb76bdcbbe08c1b25308675f8f2c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/275151
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-03-04 23:21:02 +00:00
Mike Klein
cb50b117e3 get rid of troublesome Op::round
We really only need to_unorm(),
and that's fine with trunc(mad(x, scale, 0.5)).

Change-Id: I1561c678501963a9ae53c22994fc906159fc7199
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/275075
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-03-04 22:26:01 +00:00
Herb Derby
c02a41f0e6 SkVM implement min max
Change-Id: I225e8f7395e58a4ca3c1c151d8711796e6a56939
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274185
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-02-28 21:26:15 +00:00
Mike Klein
5a8404c93e sqrt
  - test sqrt
  - impl. sqrt for llvm

Change-Id: I38a06ee57bf4d50e7d068321ab765ede3d1d73bc
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274183
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-02-28 20:55:06 +00:00
Mike Klein
14548b94c3 index
  - test index
  - impl. index in llvm
  - convert to loop counter from uint64_t -> int32_t
    to match how we use it in other backends

Change-Id: Iee371d67eddaace068906b861292eb5ed3d74c95
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274135
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-02-28 20:33:57 +00:00
Herb Derby
fb4ff8d6cc SkVM round test
Change-Id: I4226393275a11be3babe21b7f8461767c5b55f23
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274127
Commit-Queue: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2020-02-28 20:11:06 +00:00
Mike Klein
e96207a9e3 i16x2 ops
SE(val) -> S(dst_type, val) to make this work.

Change-Id: Icf42f706b2e7761db8ce83f1e1ef95c288bfecf4
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274120
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-02-28 20:03:46 +00:00
Mike Klein
22c007da34 select + stores
This is enough for another swath of tests.

Change-Id: Ida43fa2ee2ebd8e6086923fb9fafef8f646d0a93
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274074
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
2020-02-28 19:32:42 +00:00
Herb Derby
5c5bd1a637 Add comparisons (eq|neq|gt|gte)(i32|f32)
Change-Id: Ic53758162507d769548953001bd761e84d717322
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/274064
Reviewed-by: Mike Klein <mtklein@google.com>
2020-02-28 17:17:45 +00:00
Mike Klein
11efa18eca impl load32
This means we can write a memset32 (load32 -> store32),
tested explicitly with the new unit test.

Slight changes to the type protocol:
  - load and splat now generate scalars or vectors
    depending on how `scalar` is set
  - store should no longer have to pay attention to `scalar`;
    its input values will already be the right size

Clean up some of the type declarations where we don't
actually need the subclass types, holding llvm::Type* instead.
This makes using ?: easier.

Change-Id: I2f98701ebdeead0513d355b2666b024794b90193
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/273781
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-27 18:50:05 +00:00
Mike Klein
7b3999edcb convert to phi nodes
Convert our n+args stack homes to phi nodes,
essentially performing mem2reg ourselves,
eliminating the need for it at runtime.

Also, use b.getInt64(k) to create integer constants.

Also, print verifyModule() errors to stdout (instead of nowhere).

Also, update unit test to make sure we don't run off the end.

Bitcode still looks good:
    define void @skvm-jit-211960346(i64, i8*) {
    enter:
      br label %testK

    testK:                                            ; preds = %loopK, %enter
      %2 = phi i64 [ %0, %enter ], [ %6, %loopK ]
      %3 = phi i8* [ %1, %enter ], [ %7, %loopK ]
      %4 = icmp uge i64 %2, 16
      br i1 %4, label %loopK, label %test1

    loopK:                                            ; preds = %testK
      %5 = bitcast i8* %3 to <16 x i32>*
      store <16 x i32> <i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42, i32 42>, <16 x i32>* %5, align 1
      %6 = sub i64 %2, 16
      %7 = getelementptr i8, i8* %3, i64 64
      br label %testK

    test1:                                            ; preds = %loop1, %testK
      %8 = phi i64 [ %2, %testK ], [ %12, %loop1 ]
      %9 = phi i8* [ %3, %testK ], [ %13, %loop1 ]
      %10 = icmp uge i64 %8, 1
      br i1 %10, label %loop1, label %leave

    loop1:                                            ; preds = %test1
      %11 = bitcast i8* %9 to i32*
      store i32 42, i32* %11, align 1
      %12 = sub i64 %8, 1
      %13 = getelementptr i8, i8* %9, i64 4
      br label %test1

    leave:                                            ; preds = %test1
      ret void
    }

and the final assembly looks the same:

    0x10a3f5000: movabsq $0x10a3f6000, %rax        ; imm = 0x10A3F6000
    0x10a3f500a: vbroadcastss (%rax), %zmm0
    0x10a3f5010: cmpq   $0xf, %rdi
    0x10a3f5014: jbe    0x10a3f504d
    0x10a3f5016: nopw   %cs:(%rax,%rax)
    0x10a3f5020: vmovups %zmm0, (%rsi)
    0x10a3f5026: addq   $-0x10, %rdi
    0x10a3f502a: addq   $0x40, %rsi
    0x10a3f502e: cmpq   $0xf, %rdi
    0x10a3f5032: ja     0x10a3f5020
    0x10a3f5034: jmp    0x10a3f504d
    0x10a3f5036: nopw   %cs:(%rax,%rax)
    0x10a3f5040: movl   $0x2a, (%rsi)
    0x10a3f5046: decq   %rdi
    0x10a3f5049: addq   $0x4, %rsi
    0x10a3f504d: testq  %rdi, %rdi
    0x10a3f5050: jne    0x10a3f5040
    0x10a3f5052: vzeroupper
    0x10a3f5055: retq

Change-Id: I12d11c7d5786c4c3df28a49bb3044be10f0770e0
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/273753
Reviewed-by: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-27 17:04:05 +00:00
Mike Klein
b614931bfa basic JIT support?
Codegen is unimpressive so far:

(lldb) dis -s fJITEntry -c 40
    0x10a4dd000: movq   %rdi, -0x8(%rsp)
    0x10a4dd005: movq   %rsi, -0x10(%rsp)
    0x10a4dd00a: movabsq $0x10a4eb000, %rax        ; imm = 0x10A4EB000
    0x10a4dd014: movaps (%rax), %xmm0
    0x10a4dd017: cmpq   $0x7, -0x8(%rsp)
    0x10a4dd01d: jbe    0x10a4dd066
    0x10a4dd01f: nop
    0x10a4dd020: movq   -0x10(%rsp), %rax
    0x10a4dd025: movups %xmm0, 0x10(%rax)
    0x10a4dd029: movups %xmm0, (%rax)
    0x10a4dd02c: addq   $-0x8, -0x8(%rsp)
    0x10a4dd032: addq   $0x20, -0x10(%rsp)
    0x10a4dd038: cmpq   $0x7, -0x8(%rsp)
    0x10a4dd03e: ja     0x10a4dd020
    0x10a4dd040: jmp    0x10a4dd066
    0x10a4dd042: nop
    0x10a4dd043: nop
    0x10a4dd044: nop
    0x10a4dd045: nop
    0x10a4dd046: nop
    0x10a4dd047: nop
    0x10a4dd048: nop
    0x10a4dd049: nop
    0x10a4dd04a: nop
    0x10a4dd04b: nop
    0x10a4dd04c: nop
    0x10a4dd04d: nop
    0x10a4dd04e: nop
    0x10a4dd04f: nop
    0x10a4dd050: movq   -0x10(%rsp), %rax
    0x10a4dd055: movl   $0x2a, (%rax)
    0x10a4dd05b: decq   -0x8(%rsp)
    0x10a4dd060: addq   $0x4, -0x10(%rsp)
    0x10a4dd066: cmpq   $0x0, -0x8(%rsp)
    0x10a4dd06c: jne    0x10a4dd050
    0x10a4dd06e: retq
    ...

Change-Id: I97576e7b6e0696f248853e55de4f045f2b5ce77c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/273518
Reviewed-by: Herb Derby <herb@google.com>
2020-02-27 13:01:37 +00:00
Jarrett Phillips
f9734c39b8 Adding fmls instruction
Change-Id: Ia1752196fd50ade2c3160dc401a36618433420d8
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/270822
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-13 22:55:53 +00:00
Mike Klein
5cdeb390d0 only emit _imm ops when JITing for x86
There are probably ways to make this more efficient by only optimizing
what's necessary (e.g. try JIT first, then interpreter only if it fails)
and some other performance improvements to make, but for now I want to
focus mostly on keeping things simple and correct.

The line between Builder::done() and Program::Program() is particularly
fuzzy and becoming fuzzier here, and I think that'll be something
that'll change eventually.

This makes SkVMTest debug dumps more portable, though perhaps less
useful.  Might kill that feature soon now that SkVM is tested more
thoroughly in unit tests and GMs and bots and such.

Change-Id: Id9ce8daaf8570e5bea8b10f1a80b97f5b33d45dc
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/269941
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-10 19:26:05 +00:00
Mike Klein
ed9b1f1c1e refactor out a middle representation
Kind of brewing a big refactor here, to give me some room between
skvm::Builder and skvm::Program to do optimizations, backend
specializations and analysis.

As a warmup, I'm trying to split up today's Builder::Instruction into
two forms, first just what the user requested in Builder (this stays
Builder::Instruction) then a new type representing any transformation or
analysis we've done to it (OptimizedInstruction).

Roughly six important optimizations happen in SkVM today, in this order:
   1) constant folding
   2) backend-specific instruction specialization
   3) common sub-expression elimination
   4) reordering + dead code elimination
   5) loop invariant and lifetime analysis
   6) register assignment

At head 1-5 all happen in Builder, and 2 is particularly
awkward to have there (e.g. mul_f32 -> mul_f32_imm).
6 happens in Program per-backend, and that seems healthy.

As of this CL, 1-3 happen in Builder, 4-5 now on this middle
OptimizedInstruction format, and 6 still in Program.

I'd like to get to the point where 1 stays in Builder, 2-5 all happen on
this middle IR, and 6 stays in Program.  That ought to let me do things
like turn mul_f32 -> mul_f32_imm when it's good to and still benefit
from things like common sub-expression elimination and code reordering
happening after that transformation.

And then, I hope that's also a good spot to do more complicated
transformations, like lowering gather8 into gather32 plus some fix up
when targeting an x86 JIT but not anywhere else.  Today's Builder is too
early to know whether we should do this or not, and in Program it's
actually kind of awkward to do this sort of thing while also having
to do register assignment.  Some middle ground might be right.

Change-Id: I9c00268a084f07fbab88d05eb441f1957a0d7c67
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/269181
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-06 20:09:53 +00:00
Mike Klein
c66882ebcd Revert "impl gather8/gather16 with gather32"
This reverts commit d4e3b9e8bc.

Reason for revert: will reland with fixes

Original change's description:
> impl gather8/gather16 with gather32
> 
> This is our quick path to JIT small gathers.
> 
> The idea is roughly,
> 
>    const uint32_t* ptr32 = ptr8;
>    uint32_t abcd = ptr32[ix/4];
>    switch (ix & 3) {
>      case 3: return (abcd >> 24)       ;
>      case 2: return (abcd >> 16) & 0xff;
>      case 1: return (abcd >>  8) & 0xff;
>      case 0: return (abcd      ) & 0xff;
>    }
> 
> With the idea that if we may load a given byte,
> we should also be allowed to load the four byte
> aligned word that byte falls within.
> 
> Change-Id: I7fb1085306050c918ccf505f1d2e1e87db3b8c9a
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/268381
> Reviewed-by: Herb Derby <herb@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

TBR=mtklein@google.com,herb@google.com,reed@google.com

Change-Id: I48d800edc6517f37e04752c91616b666a5e0f384
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/268490
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-03 21:55:22 +00:00
Mike Klein
d4e3b9e8bc impl gather8/gather16 with gather32
This is our quick path to JIT small gathers.

The idea is roughly,

   const uint32_t* ptr32 = ptr8;
   uint32_t abcd = ptr32[ix/4];
   switch (ix & 3) {
     case 3: return (abcd >> 24)       ;
     case 2: return (abcd >> 16) & 0xff;
     case 1: return (abcd >>  8) & 0xff;
     case 0: return (abcd      ) & 0xff;
   }

With the idea that if we may load a given byte,
we should also be allowed to load the four byte
aligned word that byte falls within.

Change-Id: I7fb1085306050c918ccf505f1d2e1e87db3b8c9a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/268381
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-03 19:47:41 +00:00
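
The pseudocode above, made runnable with the pointer cast spelled out; it assumes little-endian byte order and that loading the containing 4-byte word is permissible, as the commit argues. (This CL was later reverted; see the revert entry above.)

    #include <cstdint>

    uint32_t gather8_via_gather32(const uint8_t* ptr8, uint32_t ix) {
        const uint32_t* ptr32 = reinterpret_cast<const uint32_t*>(ptr8);
        uint32_t abcd = ptr32[ix/4];    // load the whole 4-byte word the byte falls within
        switch (ix & 3) {               // then pick out the byte we actually wanted
            case  3: return (abcd >> 24)       ;
            case  2: return (abcd >> 16) & 0xff;
            case  1: return (abcd >>  8) & 0xff;
            default: return (abcd      ) & 0xff;
        }
    }
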
Mike Klein
bc1ce2c0ca test premul/unpremul are no-ops when a==1.0f
Constant propagation means we can always notionally
unpremul and premul at the right points, and if alpha
was already opaque, they'll just drop away.

This has been true, but it's nice to put a test on it.

Change-Id: Iacd2002d9e1a10b73e800a452f377001d5ba3777
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/268336
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-02-03 18:51:05 +00:00
Mike Klein
ba9da466cc radial gradients in skvm
- Add sqrt(), vsqrtps for x86.
- Hook into SkRadialGradient.

Change-Id: I66a4598e30fe16610c59a512f7d962323ee5134a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/267196
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-28 21:04:36 +00:00
Mike Klein
93d3fabcc3 improve scalar gather32
This loads 32 bits instead of gathering 256 in the tail part of loops.

To make it work, add a vmovd with SIB addressing.

I also remembered that the mysterious 0b100 is actually a signal that
the instruction uses SIB addressing, and is usually denoted by `rsp`.

(SIB addressing may be something we'd want to generalize over like we
did recently with YmmOrLabel, but I'll leave that for Future Me.)

Slight rewording where "scratch" is mentioned to keep it focused on
scratch GP registers, not "tmp" ymm registers.  Not a hugely important
distinction but helps when I'm grepping through code.

Change-Id: I39a6ab1a76ea0c103ae7d3ebc97a1b7d4b530e73
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/264376
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-01-14 18:24:56 +00:00
Mike Klein
b2b6a99dca impl gather32 for x86
Some TODOs left over to make the scalar
tail case better... as is it issues a
256-bit gather for each 32-bit load!

I added a trimmed down variant of the existing
SkVM_gathers unit test to test just gather32,
covering this new JIT code.

Change-Id: Iabd2e6a61f0213b6d02d222b9f7aec2be000b70b
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/264217
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-14 17:24:45 +00:00
Mike Klein
c322f634a3 add movq
This does the equivalent of dst = *(src + off),
which we use to find our gather base pointer in gather32.

Change-Id: I09ca7bfd404d7dce6de454ef1ed4eee78ab29932
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/264216
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-01-14 03:02:35 +00:00
Mike Klein
beaa108018 add vgatherdps
A complicated instruction to say the least!

A "fun" wrinkle is that all the ymm registers must be unique!
(And the mask register is cleared by the instruction...)

Still kind of TODO is what that 0b100 r/m in the mod_rm() means.  Every
variant of the instruction I've assembled seems to have it set to 0b100
(e.g. 0x0c or 0x04) but I'd feel better if I knew what it meant.

Change-Id: Ia4ff5f8175bff545e2d10bb2d1b14f49073445a3
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/264116
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2020-01-13 23:51:35 +00:00
Mike Klein
f22faaf254 add vroundps, impl Op::floor on x86
Change-Id: Iad94adda2da74fefb5657d883120f85ad362327e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/263461
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-09 14:19:14 +00:00
Mike Klein
92ca3baba6 JIT today's new _imm ops
- Add YmmOrLabel struct to represent the concept that many
  x86 instructions can take a final argument as either a
  register or memory address, and that they all handle them
  the same way.
- Convert existing overloads like vmulps() to use YmmOrLabel.
- upgrade some other instructions to take YmmOrLabel
- use them to implement today's new _imm ops

This feels like a good spot for implicit constructors, no?

Change-Id: I435028acc3fbfcc16f634cfccc98fe38bbce9d19
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/263207
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-08 23:38:13 +00:00
Mike Klein
a6434a5ef5 refactor bit ops
- Remove extract... it's not going to have any special impl.
  I've left it on skvm::Builder as an inline compound method.
- Add no-op shift short circuits.
- Add immediate ops for bit_{and,or,xor,clear}.

This comes from me noticing that the masks for extract today are always
immediates, and then when I started converting it to be (I32, int shift,
int mask), I realized it might be even better to break it up into its
component pieces.  There's no backend that can do extract any better
than shift-then-mask, so might as well leave it that way so we can
dedup, reorder, and specialize those micro ops.

Will follow up soon to get this all JITing again,
and these can-we-JIT test changes will be reverted.

Change-Id: I0835bcd825e417104ccc7efc79e9a0f2f4897841
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/263217
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2020-01-08 21:20:54 +00:00
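
The compound form extract() reduces to, as described above; a scalar stand-in, since skvm's inline method works lane-wise on I32 values.

    #include <cstdint>

    uint32_t extract(uint32_t x, int shift, uint32_t mask) {
        return (x >> shift) & mask;   // shift, then mask: no backend can do better
    }
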
Mike Klein
6dbd7ff34a first foray into SkVM image shaders
Basic support for clamp/clamp/nearest/RGBA-8888/premul images.

A couple little changes to make this work:
   - add pixel-center offset to shader (x,y)

   - change the signature of gather??() calls to work
     more naturally with how we let effects build uniforms.
     Instead of gathering directly from one of the program
     arguments, load the gather base pointer off another
     uniforms pointer, just like any other uniform.

   - remove the default argument to uniform??() so that
     they parallel the new gather??() calls more closely.
     There was only one place that was using the default
     and I think it's clearer as an explicit 0 offset.

   - centralize some more helpers onto skvm::Builder so
     we can use them in both SkVMBlitter and SkImageShader.

Some diffs:
   - very, very small color diffs probably due to slightly
     different math converting between byte and float or blending;
   - small sampling coordinate diffs where skvm + SkRP agree,
     and the legacy shaders disagree.  That's fine by me.

Change-Id: I72634e7fed4f13e6cb41b8067104760f392ea3bf
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/262368
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2020-01-06 19:45:53 +00:00
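
A scalar picture of the gather-signature change described above: instead of gathering straight from a program argument, the base pointer is itself read out of the uniforms block. The function shape here is a stand-in, not the Builder API.

    #include <cstdint>
    #include <cstring>

    uint32_t gather32(const char* uniforms, int offset, int ix) {
        const uint32_t* base;
        std::memcpy(&base, uniforms + offset, sizeof(base));  // load the gather base pointer, like any other uniform
        return base[ix];                                      // then gather from it
    }
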
Mike Klein
17e2714c9f skip _imm ops on ARM
These are really designed around x86, so forcing them
on ARM where our existing non-immediate ops work better
is kind of silly.

Change-Id: I6b66ed0b0a71b335becdcb1d67dec471620542b0
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/254440
Reviewed-by: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-20 00:08:05 +00:00
Mike Klein
37be7715fd implement assert_true on ARM
This all comes together as

    uminv tmp, condition
    fmov  gp, tmp
    cbnz  gp, all_true
    brk   0
  all_true:
    ...

The key idea is uminv(vec) will return 0 if any of the inputs are 0,
and non-zero if all of the inputs are non-zero, namely 0xffffffff.

fmov moves that minimum from a vector register to a general purpose
register where we can test it with cbnz, compare and branch if non-zero.
This jumps over the `brk 0` debug trap when all inputs are true.

Change-Id: If5deb77a77f52221d0649e537179743c45eb9cc5
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/254479
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-13 22:09:59 +00:00
Mike Klein
8c1e0effbb sketch out structure for ops with immediates
Lots of x86 instructions can take their right hand side argument from
memory directly rather than a register.  We can use this to avoid the
need to allocate a register for many constants.

The strategy in this CL is one of several I've been stewing over, the
simplest of those strategies I think.  There are some trade offs
particularly on ARM; this naive ARM implementation means we'll load&op
every time, even though the load part of the operation can logically be
hoisted.  From here on I'm going to just briefly enumerate a few other
approaches that allow the optimization on x86 and still allow the
immediate splats to hoist on ARM.

1) don't do it on ARM
A very simple approach is to simply not perform this optimization on
ARM.  ARM has more vector registers than x86, and so register pressure
is lower there.  We're going to end up with splatted constants in
registers anyway, so maybe just let that happen the normal way instead
of some roundabout complicated hack like I'll talk about in 2).  The
only downside in my mind is that this approach would make high-level
program descriptions platform dependent, which isn't so bad, but it's
been nice to be able to compare and diff debug dumps.

2) split Op::splat up
The next less-simple approach to this problem could fix this by
splitting splats into two Ops internally, one inner Op::immediate that
guantees at least the constant is in memory and is compatible with
immediate-aware Ops like mul_f32_imm, and an outer Op::constant that
depends on that Op::immediate and further guarantees that constant has
been broadcast into a register to be compatible with non-immediate-aware
ops like div_f32.  When building a program, immediate-aware ops would
peek for Op::constants as they do today for Op::splats, but instead of
embedding the immediate themselves, they'd replace their dependency with
the inner Op::immediate.

On x86 these new Ops would work just as advertised, with Op::immediate a
runtime no-op, Op::constant the usual vbroadcastss.  On ARM
Op::immediate needs to go all the way and splat out a register to make
the constant compatible with immediate-aware ops, and the Op::constant
becomes a noop now instead.  All this comes together to let the
Op::immediate splat hoist up out of the loop while still feeding
Op::mul_f32_imm and co.  It's a rather complicated approach to solving
this issue, but I might want to explore it just to see how bad it is.

3) do it inside the x86 JIT
The conceptually best approach is to find a way to do this peepholing
only inside the JIT only on x86, avoiding the need for new
Op::mul_f32_imm and co.  ARM and the interpreter don't benefit from this
peephole, so the x86 JIT is the logical owner of this optimization.
Finding a clean way to do this without too much disruption is the least
baked idea I've got here, though I think the most desirable long-term.

Cq-Include-Trybots: skia.primary:Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Debug-All-SK_USE_SKVM_BLITTER,Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Release-All-SK_USE_SKVM_BLITTER
Change-Id: Ie9c6336ed08b6fbeb89acf920a48a319f74f3643
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/254217
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2019-11-12 20:17:55 +00:00
Mike Klein
749eef6b1a implement assert_true on x86
The logic implemented here is roughly

  assert_true(v):
     if any ~v {
         int3()
     }

  in assembly as

  ```
    vptest v, constant 0xffffffff mask
    jc ok
    int3
  ok:
  ```

jc branches if (~v & mask) are all zero, with mask set fully, that's
branch if ~v are all zero, which is to say, v are all ~0, true.  So we
jump over the int3 breakpoint if v are all true.

Cq-Include-Trybots: skia.primary:Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Debug-All-SK_USE_SKVM_BLITTER
Change-Id: Ie0fc1da15b1a0dba00c66af610ccde18f5985f8a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253897
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
2019-11-12 20:07:35 +00:00
Mike Klein
ee5864a170 add int3, vptest, jc
Will use these to implement assert_true on x86.

Change-Id: I9d2595a35518b6971dd8e418b583febd3960c7f6
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253896
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2019-11-11 16:33:01 +00:00
Mike Klein
1360117174 add assert_true()
This is an assert that is active in debug mode.  For the moment it only
works in the interpreter, but I plan to follow up with JIT code too.

assert_true() is a data sink like a store() as far as lifetime goes,
though we take care to allow it to be hoisted if its inputs are.  An
assert_true's existence will keep all its inputs alive, and in release
builds where we skip the instruction, those inputs will all drop away
automatically.

Tested locally by forcing the interpreter.  It shouldn't be long before
I have at least x86 JIT asserts working too.

Change-Id: I7aba40d040436a57a6b930790f7b8962bafb1a8c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253756
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-11 15:12:18 +00:00
Mike Klein
6e4aad91c3 rename to_i32 -> trunc, and add round
This plumbs through round but doesn't use it.  I want that change to be
its own CL.  It's nice to have assembler support and the name changes
even if I revert using round.

Change-Id: I6d67ec5c63546069eb7cc1c91599b599bafcda66
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253724
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-08 21:00:51 +00:00
Mike Klein
a53e47fe94 native f32 min/max
No diffs.

Change-Id: Ia0b35c2787e27d74763f21b81072affa6caf1e5a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253720
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Reed <reed@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2019-11-08 20:21:38 +00:00
Mike Klein
81a8d282d3 Reland "hook up float comparisons to x86 JIT"
This is a reland of 12cea8d6c4

Now implementing float comparisons on ARM also.
Only vaguely tricky thing is that x!=y is ~(x==y).

Original change's description:
> hook up float comparisons to x86 JIT
>
> This gets the draws in gm/skvm.cpp all JITing again,
> and in one of the unit tests.
>
> (Everything draws the same of course.)
>
> Change-Id: Iada28690d9df78f9d444ee3765e21beb29239672
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253166
> Auto-Submit: Mike Klein <mtklein@google.com>
> Reviewed-by: Mike Klein <mtklein@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

Cq-Include-Trybots: skia.primary:Test-Android-Clang-NVIDIA_Shield-CPU-TegraX1-arm64-Debug-All-Android
Change-Id: I771b8a327a958db8a0d509d55863ade935a00035
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253401
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-07 18:55:14 +00:00
Mike Klein
3f7c865936 avoid the JIT on MSAN builds
JIT code isn't MSAN-instrumented, so we won't see when it uses
uninitialized memory, and we'll not see the writes it makes as properly
initializing memory.  Instead force the interpreter, which should let
MSAN see everything our programs do properly.

This refactors so that SkVM.cpp is the only code to look at whether
SKVM_JIT is defined, and undefines it when built with MSAN.  Added
a simple regression test too.

Cq-Include-Trybots: skia.primary:Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Debug-All-MSAN
Change-Id: Ic7cca2621f84dfba7174127738744d6c68f85f2e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253410
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-07 18:24:34 +00:00
Mike Klein
297d5a03e6 Revert "hook up float comparisons to x86 JIT"
This reverts commit 12cea8d6c4.

Reason for revert: unit tests failing on ARM... will try again once I have float comparisons implemented for ARM too.

Original change's description:
> hook up float comparisons to x86 JIT
> 
> This gets the draws in gm/skvm.cpp all JITing again,
> and in one of the unit tests.
> 
> (Everything draws the same of course.)
> 
> Change-Id: Iada28690d9df78f9d444ee3765e21beb29239672
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253166
> Auto-Submit: Mike Klein <mtklein@google.com>
> Reviewed-by: Mike Klein <mtklein@google.com>
> Commit-Queue: Mike Klein <mtklein@google.com>

TBR=mtklein@google.com,reed@google.com

Change-Id: Ie07e580b4998199338217a27d4fad34c679ffc23
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253399
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-07 15:35:12 +00:00
Mike Klein
12cea8d6c4 hook up float comparisons to x86 JIT
This gets the draws in gm/skvm.cpp all JITing again,
and in one of the unit tests.

(Everything draws the same of course.)

Change-Id: Iada28690d9df78f9d444ee3765e21beb29239672
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253166
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-11-07 14:36:32 +00:00
Mike Klein
714f8cc3ff add vcmpps
Change-Id: I7a13b759d2cd2c27c107ff4cec0daa15c2cd9edb
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/253131
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2019-11-06 21:22:22 +00:00
Mike Klein
7a13b461e6 x86-64 JIT support for Op::index
This lets shaders that use 'x' JIT on x86.

I started with paddd and {0,-1,-2,-3,...}, which worked fine but on
second thought seemed a bit odd.  I've switched to psubd and
{0,1,2,3,...} but I've left in support for paddd with a memory arg.

gm/skvm.cpp now JITs fully again and continues to draw the same as
the interpreter did.

Simplify embedded data alignment a little... memory operands don't
need full register alignment in AVX like they used to in SSE.  So
just align everything to the vector element size like we do on ARM,
and reorder [splats,bytes_masks,iota] to match the order we declare
and handle them in the code above.

Add unit tests for vpaddd + vpsubd.

Cq-Include-Trybots: skia.primary:Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Debug-All-SK_USE_SKVM_BLITTER
Change-Id: I6b8d060450cca7f437a1d2a597a8a0e0e8d51b33
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/252797
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Reed <reed@google.com>
2019-11-05 17:11:37 +00:00
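
A scalar model of what the psubd formulation above appears to compute, assuming the value being subtracted from is the loop counter n (my reading); the earlier paddd form added {0,-1,-2,...} to the same effect.

    #include <cstdint>

    void index_lanes(int32_t n, int32_t out[8]) {
        static const int32_t iota[8] = {0,1,2,3,4,5,6,7};
        for (int i = 0; i < 8; i++) {
            out[i] = n - iota[i];   // splat(n) - iota, one vpsubd per 8-wide vector
        }
    }
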
Mike Klein
d48488b5ea reorder to minimize register pressure
Rewrite program instructions so that each value becomes available as
late as possible, just before it's used by another instruction.  This
reorders blocks of instructions to reduce them number of temporary
registers in flight.

Take this example of the sort of program that we naturally write,
noting the registers needed as we progress down the right:

    src = load32 ...          (1)
    sr = extract src ...      (2)
    sg = extract src ...      (3)
    sb = extract src ...      (4)
    sa = extract src ...      (4, src dies)

    dst = load32 ...          (5)
    dr = extract dst ...      (6)
    dg = extract dst ...      (7)
    db = extract dst ...      (8)
    da = extract dst ...      (8, dst dies)

    r = add sr dr             (7, sr and dr die)
    g = add sg dg             (6, sg and dg die)
    b = add sb db             (5, sb and db die)
    a = add sa da             (4, sa and da die)

    rg   = pack r g ...       (3, r and g die)
    ba   = pack b a ...       (2, b and a die)
    rgba = pack rg ba ...     (1, rg and ba die)
    store32 rgba ...          (0, rgba dies)

That original ordering of the code needs 8 registers (perhaps with a
temporary 9th, but we'll ignore that here).  This CL will rewrite the
program to something more like this by recursively issuing inputs only
once needed:

    src = load32 ...       (1)
    sr  = extract src ...  (2)
    dst = load32 ...       (3)
    dr  = extract dst ...  (4)
     r  = add sr dr        (3, sr and dr die)

    sg  = extract src ...  (4)
    dg  = extract dst ...  (5)
     g  = add sg dg        (4, sg and dg die)

    rg  = pack r g         (3, r and g die)

    sb  = extract src ...  (4)
    db  = extract dst ...  (5)
     b  = add sb db        (4, sb and db die)

    sa  = extract src ...  (4, src dies)
    da  = extract dst ...  (4, dst dies)
     a  = add sa da        (3, sa and da die)

    ba  = pack b a         (2, b and a die)

    rgba = pack rg ba ...  (1, rg and ba die)
    store32 rgba  ...      (0)

That trims 3 registers off the example, just by reordering!
I've added the real version of this example to SkVMTest.cpp.
(Its 6th register comes from holding the 0xff byte mask used
by extract, in case you're curious).

I'll admit it's not exactly easy to work out how this reordering works
without a pen and paper or trial and error.  I've tried to make the
implementation preserve the original program's order as much as makes
sense (i.e. when order is an otherwise arbitrary choice) to keep it
somewhat sane to follow.

This reordering naturally skips dead code, so pour one out for ☠️ .
We lose our cute dead code emoji marker, but on the other hand all code
downstream of Builder::done() can assume every instruction is live.

Change-Id: Iceffcd10fd7465eae51a39ef8eec7a7189766ba2
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/249999
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2019-10-22 21:49:05 +00:00
Mike Klein
97afd2e21c add bsl.16b, cmeq.4s, cmgt.4s
These implement select, eq_i32, lt_i32, and gt_i32 on ARMv8.

Change-Id: Ic36dda1cc425ca91700f9b120594e420ea0f560a
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/248970
Auto-Submit: Mike Klein <mtklein@google.com>
Commit-Queue: Herb Derby <herb@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2019-10-17 01:54:33 +00:00
Mike Klein
0f61c12737 add used_in_loop bit to skvm::Builder::Instruction
Most hoisted values are used in the loop body (and that's really the
whole point of hoisting) but some are just temporaries to help produce
other hoisted values.  This used_in_loop bit helps us distinguish the
two, and lets us recycle registers holding temporary hoisted values not
used in the loop.

The can-we-recycle logic now becomes:
   - is this a real value?
   - is it time for it to die?
   - is it either not hoisted or a hoisted temporary?

The set-death-to-infinity approach for hoisted values is now gone.  That
worked great for hoisted values used inside the loop, but was too
conservative for hoisted temporaries.  This lifetime extension was
preventing us from recycling those registers, pinning enough registers
that we run out and fail to JIT.

Small amounts of refactoring to make this clearer:
   - move the Instruction hash function definition near its operator==
   - rename the two "hoist" variables to "can_hoist" for Instructions
     and "try_hoisting" for the JIT approach
   - add ↟ to mark hoisted temporaries, _really_ hoisted values.

There's some redundancy here between tracking the can_hoist bit, the
used_in_loop bit, and lifetime tracking.  I think it should be true, for
instance, that !can_hoist && !used_in_loop implies an instruction is
dead code.  I plan to continue refactoring lifetime analysis (in
particular reordering instructions to decrease register pressure) so
hopefully by the time I'm done that metadata will shake out a little
crisper.

Change-Id: I6460ca96d1cbec0315bed3c9a0774cd88ab5be26
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/248986
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2019-10-16 18:29:06 +00:00
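
The can-we-recycle test above, written out as a predicate; field names are illustrative rather than the actual SkVM members.

    struct Inst {
        bool is_real_value;   // produces an actual value (not dead code or a pseudo-op)
        int  death;           // index of its last user
        bool can_hoist;       // loop-invariant, may be hoisted above the loop
        bool used_in_loop;    // the new bit: a hoisted value the loop body still reads
    };

    bool can_recycle_register(const Inst& v, int now) {
        return v.is_real_value
            && v.death <= now                       // is it time for it to die?
            && (!v.can_hoist || !v.used_in_loop);   // not hoisted, or just a hoisted temporary
    }
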
Mike Klein
b5a30767e0 Reland "mark which SkVM tests should JIT or not"
This is a reland of 52435503e9

with better checks for when we should expect JIT and not.

Original change's description:
> mark which SkVM tests should JIT or not
>
> Most of these tests converted over to test_interpreter_only()
> are failing to JIT because of unimplemented instructions.  No
> bug there, just TODOs.
>
> But SkVM_hoist _should_ be JITting.  A while back I landed a CL
> that messed with value lifetimes that prevents it from JITting.
> Will be using this as a regression test to fix that bug.
>
> Change-Id: Id2034f6548a45ed9aeb9ae3cbb24d389cad7dc60
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/248980
> Commit-Queue: Mike Klein <mtklein@google.com>
> Commit-Queue: Ethan Nicholas <ethannicholas@google.com>
> Auto-Submit: Mike Klein <mtklein@google.com>
> Reviewed-by: Ethan Nicholas <ethannicholas@google.com>
> Reviewed-by: Herb Derby <herb@google.com>

Cq-Include-Trybots: skia.primary:Test-Android-Clang-NVIDIA_Shield-CPU-TegraX1-arm64-Release-All-Android,Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Release-All-SK_CPU_LIMIT_SSE2,Test-Debian9-Clang-GCE-CPU-AVX2-x86_64-Release-All-SK_CPU_LIMIT_SSE41,Test-Mac10.13-Clang-VMware7.1-CPU-AVX-x86_64-Debug-All-NativeFonts,Test-Mac10.14-Clang-VMware7.1-CPU-AVX-x86_64-Debug-All-NativeFonts
Change-Id: Id7bde7e879649e435fa424a9c9d6c51a31afd5e9
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/248990
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-10-16 17:35:06 +00:00
Mike Klein
4e11526e3d Revert "mark which SkVM tests should JIT or not"
This reverts commit 52435503e9.

Reason for revert: lots of bots can't JIT, duh...

Original change's description:
> mark which SkVM tests should JIT or not
> 
> Most of these tests converted over to test_interpreter_only()
> are failing to JIT because of unimplemented instructions.  No
> bug there, just TODOs.
> 
> But SkVM_hoist _should_ be JITting.  A while back I landed a CL
> that messed with value lifetimes that prevents it from JITting.
> Will be using this as a regression test to fix that bug.
> 
> Change-Id: Id2034f6548a45ed9aeb9ae3cbb24d389cad7dc60
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/248980
> Commit-Queue: Mike Klein <mtklein@google.com>
> Commit-Queue: Ethan Nicholas <ethannicholas@google.com>
> Auto-Submit: Mike Klein <mtklein@google.com>
> Reviewed-by: Ethan Nicholas <ethannicholas@google.com>
> Reviewed-by: Herb Derby <herb@google.com>

TBR=mtklein@google.com,herb@google.com,ethannicholas@google.com

Change-Id: Ieea4b06f0d32249e3da56c6810d3c45c2abf2689
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/248989
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-10-16 16:48:58 +00:00
Mike Klein
52435503e9 mark which SkVM tests should JIT or not
Most of these tests converted over to test_interpreter_only()
are failing to JIT because of unimplemented instructions.  No
bug there, just TODOs.

But SkVM_hoist _should_ be JITting.  A while back I landed a CL
that messed with value lifetimes that prevents it from JITting.
Will be using this as a regression test to fix that bug.

Change-Id: Id2034f6548a45ed9aeb9ae3cbb24d389cad7dc60
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/248980
Commit-Queue: Mike Klein <mtklein@google.com>
Commit-Queue: Ethan Nicholas <ethannicholas@google.com>
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Ethan Nicholas <ethannicholas@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2019-10-16 15:45:16 +00:00
Mike Klein
6b4143e11f move skvm debug tools back to core
I happened to have this on when profiling skottie_tool and got curious
why I was seeing the interpreter run and not JIT code.  Mostly this
moves the code in bulk out of SkVMTest.cpp to SkVM.cpp so that code in
SkVM.cpp can call dump() on itself.

Also this CL has the skvm::Program hang onto the original value-based
builder program (in addition to its own interpreter program and JIT
program if we can).  This is entirely so that when JIT bails out I
can have it dump out both the builder and interpreter programs for
more debugging aid.

I'm still going to need more debug tools somewhere to figure out
what the program that needs 17 registers is, and what to do about
it.

Finally, remove skvmtool.  It's annoying to maintain its build
rules, and I don't use it much if ever anymore.

Change-Id: I995d15d04bda79ddfc4d68bda8aaa3b5b9261f08
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/242520
Auto-Submit: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-09-18 19:41:11 +00:00
Mike Klein
f996311003 extend lifetimes for hoisted values used in loop
This makes the register recycling checks a bit more
precise.  At head we never recycle a register that's
holding a hoisted value, which is overly conservative.

We really should never recycle a register that's still
needed.  By extending the lifetime of any hoisted value
that's used in the loop, we prevent that, while still
allowing hoisted values that are only used in hoisted
computation to be reused.

This takes just a small tweak in the JIT code (removing
the !hoisted({x,y,z}) checks), and a somewhat larger
refactoring in the interpreter, making both hoisted and
non-hoisted code go through the same recycling register
assignment flow.

There's one diff in the existing cases where we now
reuse a hoisted register, and I've added a second test
just to make sure it's covered explicitly.
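
The lifetime extension itself is small; roughly (with hypothetical
helpers, since the real code structures this differently):

    for (Val v = 0; v < (Val)instructions.size(); v++) {
        if (hoisted(v) && used_in_loop_body(v)) {
            death[v] = (int)instructions.size();   // pin its register for the whole loop
        }
    }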

Change-Id: I25b37ab1f1fea3042d7fd167529abc8fed1dddff
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/233239
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-08-13 02:08:16 +00:00
Mike Klein
b994412823 select, {eq,lt,gt}_i32 on x86
Add vpblendvb, vpcmpeqd, and vpcmpgtd, to implement select and eq/lt/gt.
I want to think just a touch more about neq, lte, and gte.

This is enough to JIT everything SkVMBlitter creates today.

There are 24 possible argument orders to vpblendvb,
so I'm sure I've got them wrong somehow, even with the new test.
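
In intrinsics terms the mapping looks roughly like this (a sketch, not
the generated code; lt reuses vpcmpgtd with swapped arguments):

    #include <immintrin.h>

    __m256i eq_i32(__m256i x, __m256i y) { return _mm256_cmpeq_epi32(x, y); }                // vpcmpeqd
    __m256i gt_i32(__m256i x, __m256i y) { return _mm256_cmpgt_epi32(x, y); }                // vpcmpgtd
    __m256i lt_i32(__m256i x, __m256i y) { return _mm256_cmpgt_epi32(y, x); }                // vpcmpgtd, swapped
    __m256i select(__m256i c, __m256i t, __m256i e) { return _mm256_blendv_epi8(e, t, c); }  // vpblendvb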

Change-Id: I357664b866d8258a2b5438d520f47542ad581c50
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232060
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-08-02 18:28:15 +00:00
Mike Klein
95529e8216 x86 store16
Nothing particularly tricky.  Very much like store8,
but with one fewer shuffle.

This lets some of the 565 blitters JIT!
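
One plausible way to picture it in intrinsics (narrow 8 x i32 lanes to
8 x u16 and store; the JIT's exact instruction choice may differ, and
lane values are assumed to already fit in 16 bits):

    #include <immintrin.h>
    #include <cstdint>

    void store16(uint16_t dst[8], __m256i v) {
        __m256i p = _mm256_packus_epi32(v, v);                        // 32 -> 16 bit within each 128-bit half
        p = _mm256_permute4x64_epi64(p, 0xd8);                        // pull both halves' results together
        _mm_storeu_si128((__m128i*)dst, _mm256_castsi256_si128(p));   // store low 128 bits: 8 x u16
    }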

Change-Id: I853905bda30a0cda89f3fcb5fef1dfe62725063b
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232059
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-08-02 17:47:34 +00:00
Mike Klein
52010b7dd8 x86, load16
Very similar to load8.

The only interesting thing is how different vpinsrw is from vpinsrb (and
vpinsrd and vpinsrq)... different map of instructions entirely.
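
Semantically, load16 reads 8 x u16 and zero-extends to 8 x i32; in
intrinsics that's roughly (a sketch, not the instruction sequence the
JIT emits):

    #include <immintrin.h>
    #include <cstdint>

    __m256i load16(const uint16_t src[8]) {
        __m128i halves = _mm_loadu_si128((const __m128i*)src);  // 8 x u16
        return _mm256_cvtepu16_epi32(halves);                   // zero-extend to 8 x i32
    }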

Change-Id: Ia413b83604dd2d277d59495c5f693f505c35be9f
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232058
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-08-02 16:52:44 +00:00
Mike Klein
94d054b53c x86 uniform8
Add vbroadcastss(Ymm, Xmm) and expand movzbl() to support an offset.

The sequence

    movzbl          (load a byte, zero-extend to 4 byte)
    vmovd           (move that 4 byte value to an xmm)
    vbroadcastss    (broadcast low 4 bytes of xmm to all lanes of ymm)

implements uniform8.
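
Or, the same three steps in C-with-intrinsics form (a sketch, not the
emitted code):

    #include <immintrin.h>
    #include <cstdint>

    __m256i uniform8(const uint8_t* uniforms, int offset) {
        uint32_t v = uniforms[offset];        // movzbl: load a byte, zero-extend
        return _mm256_set1_epi32((int)v);     // vmovd + vbroadcastss: splat to all 8 lanes
    }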

Change-Id: I1d3125920d19dcb3cad5980495310bc95b9dffee
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232057
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-08-02 16:41:44 +00:00
Mike Klein
788967eb37 add vbroadcastss(Ymm, GP64, int)
vbroadcastss with a register argument and immediate offset implements
uniform32.  And this turns on the JIT for some new SkVMBlitter paths
that pass more arguments than I'd previously wired up, so add a few
more.

Change-Id: I66db1286dcdb2c4a4ba7c43f2dc2cd13564d4d34
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/232056
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-08-02 16:07:37 +00:00
Mike Klein
9fdadb949f test a (the) zero-arg program
Just because a program doesn't read from or
write to memory doesn't mean it's pointless.

Oh wait, yes it does.  It shouldn't crash though.

Change-Id: I6a9c26c065831f9598afccce6e0a34a178cbd925
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/230839
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-30 19:58:37 +00:00
Mike Klein
d4546d6c4c add vmovdqa(ymm,ymm)
Fills in a little TODO.
Also add assembler unit tests for similar float->int and int->float.

Tested by SkVM_mad, SkVM_madder.

Change-Id: I5334029927fdecb0ff7f5a3b081cf2ce7b23995c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/230838
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-30 18:43:35 +00:00
Mike Klein
81d526703c more unit tests
Backfill tests for some uncovered operations.

Mostly functional tests, with some targeted at particular
corners of code generation, and a few assembler unit tests.

This doesn't get up to 100% line coverage, but I think it
covers all the non-failure cases.

Change-Id: Ib701df0c505aa41e3b2fe3cd429447acb294b752
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/230837
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-30 17:35:15 +00:00
Mike Klein
5591fdf930 small refactors
Change-Id: I58b52d3e1d05d0834be30e00d991636e227cbf0b
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/230836
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-30 15:43:13 +00:00
Mike Klein
8ac9f4e5b2 flesh out SkVM ops a bit more
Add missing comparison and selection ops, bit casts, 16-bit memory
operations, gathers, uniform loads, and fill in math holes where
reasonable.  Update some names to be a bit more regular.

I think all instructions are implemented in the interpreter,
and many tested.  More testing and JITs to follow.

Change-Id: I8cf377e8b72a86ac950e020892ce82b39e9d7277
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/229893
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-29 20:43:10 +00:00
Mike Klein
f98d0d31c4 let JIT code hoist when possible
This first tries to JIT while hoisting all constants,
and if that fails, tries again hoisting no constants.

I figure this is one of those 80/20 deals for how to
handle constant hoisting and register pressure.  This
probably mostly moots doing anything fancy like using
memory operands with AVX or lane operands with NEON.

This _doesn't_ moot hoisting the NEON tbl arguments,
which is not yet done here, but probably my next CL.
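
The control flow is just a retry (rough sketch; jit() here stands in
for whatever the backend's entry point is):

    bool ok = jit(instructions, /*try_hoisting=*/true);     // hoist every constant if registers allow
    if (!ok) {
        ok = jit(instructions, /*try_hoisting=*/false);     // otherwise rebuild constants in the loop
    }
    if (!ok) {
        // no JIT; fall back to the interpreter
    }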

Change-Id: Id09d5cdddcdb45207bdfc914a5a3128a481a26f3
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/229058
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-22 21:06:34 +00:00
Mike Klein
5e533c9e1f move hoist analysis back into Builder
Even if a JIT ultimately doesn't end up hoisting any values, it's going
to want this information while it decides.  Writing it in one place also
ensures we only get it wrong in one place...

I'm _not_ extending the lifetime of hoisted instructions here in Builder.
That's something to leave to the backend so they have the flexibility of
which of these values to hoist, if any.  If they don't hoist, they'll
need to know when the value dies.

Moving this information back here lets the test expectation goldens
reflect the hoist bit again too.  Kind of nice.

Change-Id: Ib165ca898a97c1d822cb28fe24f15bae4d570a17
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/229024
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-22 19:34:06 +00:00
Mike Klein
4a13119a60 always fma in mad_f32()
We can always move data around so that an FMA is possible using no more
registers than we would otherwise, and on x86, even using no more
instructions.

The basic idea here is that if we can't reuse one of the inputs to
destructively host the FMA instruction, the next best thing is to copy
one of the arguments into tmp() and accumulate the FMA there.

Once the FMA has happened, we just need to copy that result to dst().
We can of course skip that copy if dst() == tmp().  On x86 we never need
that copy; dst() and tmp() are picked using the same logic except that
dst may alias one of its inputs, and we only fall into this case after
we've already found it doesn't.  So we can just assert dst() == tmp()
rather than check it like we do on ARM.

It's subtle, but I think sound.

I'm using logical-or to copy registers around.  This is a little lazy,
but maybe not as lazy as it looks: on ARM that is _the_ way to copy
registers.  There's a vmovdqa instruction I could use on x86, TBD.

All paths through this new code were being exercised on ARM, but we
didn't have anything hitting the tmp case on x86, so I've added a new
unit test that hits the corner cases of both implementations.
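
The dispatch reads roughly like this sketch (assembler method names and
helpers are illustrative, not copied from the CL):

    if      (dst() == r(x)) { a->vfmadd132ps(dst(), r(z), r(y)); }   // dst = dst*y + z
    else if (dst() == r(y)) { a->vfmadd213ps(dst(), r(x), r(z)); }   // dst = x*dst + z
    else if (dst() == r(z)) { a->vfmadd231ps(dst(), r(x), r(y)); }   // dst = x*y + dst
    else {
        a->vpor       (tmp(), r(x), r(x));                           // copy x into tmp via logical-or
        a->vfmadd132ps(tmp(), r(z), r(y));                           // tmp = tmp*y + z
        SkASSERT(dst() == tmp());                                    // on x86 this always holds (see above)
    }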

Change-Id: I5422414fc50c64d491b4933b4b580b784596f291
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228630
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-19 20:12:42 +00:00
Mike Klein
1326749832 add sli.4s, use it in pack sometimes
We have pack(x,y,imm) = x | (y<<imm) assuming (x & (y<<imm)) == 0.

If we can destroy x, sli (shift-left-insert) lets us implement that
as x |= y << imm.  This happens quite often, so you'll see sequences
of pack that used to look like this

	shl	v4.4s, v2.4s, #8
	orr	v1.16b, v4.16b, v1.16b
	shl	v2.4s, v0.4s, #8
	orr	v0.16b, v2.16b, v3.16b
	shl	v2.4s, v0.4s, #16
	orr	v0.16b, v2.16b, v1.16b

now look like this

	sli	v1.4s, v2.4s, #8
	sli	v3.4s, v0.4s, #8
	sli	v1.4s, v3.4s, #16

We can do this thanks to the new simultaneous register assignment
and instruction selection I added.  We used to never hit this case.

Change-Id: I75fa3defc1afd38779b3993887ca302a0885c5b1
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228611
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-19 18:37:12 +00:00
Mike Klein
9e2218a06a restore aarch64 JIT
Trying to keep most of the structural parts shared between x86_64 and
aarch64.  Not sure if this will stay factored like this long-term, but
the last version felt like there was a bit too much redundancy, and I
don't want to write things like register management more often than I have
to.

Change-Id: Ieeb21f433715a730c41c85d657c5b33fa4702696
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228608
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-19 18:06:52 +00:00
Mike Klein
237dbb4d87 small cleanups
- get rid of variadic Assembler::byte()... not used very often
  - rename Assembler::byte(ptr,n) to bytes()
  - align with 0 bytes, get rid of nop()

Change-Id: I7564d3bad00e3f0d1c7a80153c445966914fccf0
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228601
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-19 15:36:11 +00:00
Mike Klein
37607d4ccd Reland "more JIT refactoring"
This is a reland of 558b639225

PS2... oh, right, not everything supports AVX2.

Original change's description:
> more JIT refactoring
>
> This re-enables AVX2 JIT with simultaneous register assignment and
> instruction selection.  You can see it working in a very basic way in
> how we choose instructions and registers for Op::mad_f32.
>
> Constants are still broadcast, here inside the loop instead of hoisted.
> I think it'll probably end up best to use constants directly from memory
> (as in vpshufb's masks), falling back to these in-loop broadcasts when
> that can't work.
>
> Change-Id: If17d51b9960f08da3612e51ac04424e996bf83d4
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228366
> Commit-Queue: Mike Klein <mtklein@google.com>
> Reviewed-by: Mike Klein <mtklein@google.com>

Cq-Include-Trybots: skia.primary:Test-Mac10.13-Clang-VMware7.1-CPU-AVX-x86_64-Debug-All-NativeFonts
Change-Id: I6f99d275040abe6210a980fc544f7f22c3b85727
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228476
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-18 23:59:47 +00:00
Mike Klein
d864d1dc19 Revert "more JIT refactoring"
This reverts commit 558b639225.

Reason for revert: broke some perf bots

Original change's description:
> more JIT refactoring
> 
> This re-enables AVX2 JIT with simultaneous register assignment and
> instruction selection.  You can see it working in a very basic way in
> how we choose instructions and registers for Op::mad_f32.
> 
> Constants are still broadcast, here inside the loop instead of hoisted.
> I think it'll probably end up best to use constants directly from memory
> (as in vpshufb's masks), falling back to these in-loop broadcasts when
> that can't work.
> 
> Change-Id: If17d51b9960f08da3612e51ac04424e996bf83d4
> Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228366
> Commit-Queue: Mike Klein <mtklein@google.com>
> Reviewed-by: Mike Klein <mtklein@google.com>

TBR=mtklein@google.com

Change-Id: Id6cd5acd873499bb394009489d77e7636ecbc9c6
No-Presubmit: true
No-Tree-Checks: true
No-Try: true
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228462
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-18 23:21:24 +00:00
Mike Klein
558b639225 more JIT refactoring
This re-enables AVX2 JIT with simultaneous register assignment and
instruction selection.  You can see it working in a very basic way in
how we choose instructions and registers for Op::mad_f32.

Constants are still broadcast, here inside the loop instead of hoisted.
I think it'll probably end up best to use constants directly from memory
(as in vpshufb's masks), falling back to these in-loop broadcasts when
that can't work.

Change-Id: If17d51b9960f08da3612e51ac04424e996bf83d4
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228366
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2019-07-18 22:24:42 +00:00
Mike Klein
62bccdaf3c move death back into Builder::Instruction
I find myself passing around parallel vectors of Builder::Instructions
and deaths so often that it just makes more sense practically to store
them together.  It's a little awkward that the values are only useful
after calling done(), but I can live with that.

Get a little more careful about mutation, passing Builder::Instructions
by const&.  Instead of extending lifetimes of live hoisted
instructions, we now just check for them in maybe_recycle_register().
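
Sketch of the shape this leaves Instruction in (fields other than death
are illustrative; the real struct differs in detail):

    struct Instruction {
        Op  op;            // the operation producing this value
        Val x, y, z;       // ids of argument values
        int imm;           // immediate argument, if any
        int death;         // index of the last instruction using this value; set by done()
    };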

Change-Id: I1cb9e25c1a7c46a250c2271334821be8535353bf
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228367
Reviewed-by: Mike Klein <mtklein@google.com>
Commit-Queue: Mike Klein <mtklein@google.com>
2019-07-18 19:01:44 +00:00
Mike Klein
c2fb3b4b72 split deaths() out of other analysis
I'm slowly refactoring my way to where hoisting and register assignment
are done in backend-specific ways, but this liveness analysis is always
going to be useful for each backend.

Use deaths() to restore friendly ☠️  dead code markers in test dumps.

Change-Id: I3ab94665bbbbf0788b0b27e00d644eba927dff47
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/228113
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Mike Klein <mtklein@google.com>
2019-07-17 18:11:10 +00:00
Mike Klein
9977efa703 test both JIT and interpreter
Add Program::dropJIT() to allow us to proactively drop
any JIT code forcing fallback on the interpreter,
and use it to test both on JIT-supported platforms.

Other platforms will just test the interpreter twice.
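
Typical use in the tests looks something like this (run_checks() is a
stand-in for whatever exercises the program):

    skvm::Program program = builder.done();
    run_checks(program);    // on JIT-capable platforms this runs the JIT code
    program.dropJIT();      // throw away the JIT code, forcing the interpreter
    run_checks(program);    // same checks, now against the interpreter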

Change-Id: I607d00ef3c648e66a0b3a1374b11aa82dbfff70c
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/227424
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Herb Derby <herb@google.com>
2019-07-16 22:31:17 +00:00