I'm thinking of using this in perf with something like:
ratio(fill(filter("test=foo")), fill(filter("test=control")))
Does that make sense to you?
Not sure that this is really a good control bench on all bots,
but I propose we just run it a bit and find out if it needs work.
BUG=skia:
Review URL: https://codereview.chromium.org/1129823003
The benches for N <= 10 get around 2x faster on my N7 and N9. I believe this
is because of the reduced function-call-then-function-pointer-call overhead on
the N7, and additionally because autovectorization seems to beat our NEON code
for small N on the N9.
My desktop is unchanged, though that's probably because N=10 lies well within a
region where memset's performance is essentially constant: N=100 takes only
about 2x as long as N=1 and N=10, which perform nearly identically.
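To illustrate what "function-call-then-function-pointer-call overhead" means here (a hypothetical sketch, not the actual Skia code):

    #include <stdint.h>

    // A plain loop like this can be inlined and autovectorized at the call site.
    static void memset32_portable(uint32_t dst[], uint32_t value, int count) {
        for (int i = 0; i < count; i++) {
            dst[i] = value;
        }
    }

    // A runtime-selected proc (e.g. a NEON variant installed at startup)...
    typedef void (*Memset32Proc)(uint32_t dst[], uint32_t value, int count);
    static Memset32Proc gMemset32Proc = memset32_portable;

    // ...reached through an out-of-line wrapper costs a call plus an indirect
    // call before any stores happen, which dominates for small N.
    void sk_memset32_dispatch(uint32_t dst[], uint32_t value, int count) {
        gMemset32Proc(dst, value, count);
    }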
BUG=skia:
Review URL: https://codereview.chromium.org/1073863002
The color filter is applied to a single color (the paint's), so the bench does
not measure the filter at all, only the blit of a color.
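For illustration (a hypothetical sketch, not the bench itself): when the paint has both a solid color and a color filter, the filter can be folded into a single output color up front, so the per-pixel work is just a solid-color blit.

    #include "SkCanvas.h"
    #include "SkColorFilter.h"
    #include "SkPaint.h"

    // Hypothetical sketch: the color filter is evaluated once against the
    // paint's color, so the draw below is effectively a plain color blit.
    static void draw_filtered_color(SkCanvas* canvas, SkColorFilter* filter) {
        SkPaint paint;
        paint.setColor(SK_ColorRED);
        paint.setColorFilter(filter);
        canvas->drawRect(SkRect::MakeWH(256, 256), paint);
    }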
BUG=skia:
TBR=
Review URL: https://codereview.chromium.org/1055383002
Now that all SkCodecs can rewind (assuming the stream is rewindable),
we do not need to special case it.
Pointed out by Derek in the code review that added this.
TBR=djsollen
Review URL: https://codereview.chromium.org/1058633002
CodecBench:
Add new class for timing using SkCodec. (A rough sketch of the timing loop
follows below, after the TODOs.)
DecodingBench:
Include creating a decoder inside the loop. This is to have a better
comparison against SkCodec. SkCodec's factory function does not
necessarily read the same amount as SkImageDecoder's, so in order to
have a meaningful comparison, read the entire stream from the
beginning. Also for comparison, create a new SkStream from the
SkData each time.
Add a debugging check to make sure we have an SkImageDecoder.
Add include guards.
nanobench.cpp:
Decode using SkCodec.
When decoding using SkImageDecoder, exclude benches where we decoded
to a different color type than requested. SkImageDecoder may decide to
decode to a different type, in which case the name is misleading.
TODOs:
Now that we ignore color types that do not match the desired
color type, we should add Index8. This also means calling the more
complex version of getPixels so CodecBench can support kIndex8.
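For illustration, a rough sketch of the CodecBench timing loop (the function and parameter names here are hypothetical, not the actual bench code): a fresh SkMemoryStream and a fresh SkCodec are created each iteration, mirroring what DecodingBench does with SkImageDecoder.

    #include "SkCodec.h"
    #include "SkData.h"
    #include "SkImageInfo.h"
    #include "SkStream.h"
    #include "SkTemplates.h"

    // Hypothetical sketch of the timed inner loop.  A new stream and a new
    // codec are created each iteration so stream consumption stays
    // comparable to the SkImageDecoder loop in DecodingBench.
    static void time_codec_decodes(int loops, SkData* encoded,
                                   const SkImageInfo& info, void* pixels) {
        for (int i = 0; i < loops; i++) {
            SkAutoTDelete<SkCodec> codec(
                    SkCodec::NewFromStream(new SkMemoryStream(encoded)));
            codec->getPixels(info, pixels, info.minRowBytes());
        }
    }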
BUG=skia:3257
Review URL: https://codereview.chromium.org/1044363002
The primary feature this delivers is SkNf and SkNi for arbitrary power-of-two N. Non-specialized types or types larger than 128 bits should now Just Work (and we can drop in a specialization to make them faster). Sk4s is now just a typedef for SkNf<4, SkScalar>; Sk4d is SkNf<4, double>, Sk2f is SkNf<2, float>, etc.
This also makes implementing new specializations easier and more encapsulated. We're now using template specialization, which means the specialized versions don't have to leak out so much from SkNx_sse.h and SkNx_neon.h.
This design leaves us room to grow up, e.g. to SkNf<8, SkScalar> == Sk8s, and to grow down too, to things like SkNi<8, uint16_t> == Sk8h.
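For a feel of the shape of the generic code (an illustrative sketch, not the actual SkNx.h): an N-wide vector is two (N/2)-wide halves plus a scalar base case, which is what lets any power-of-two N Just Work and lets SkNx_sse.h / SkNx_neon.h specialize only the widths they accelerate.

    // Illustrative sketch only, not the actual SkNx.h.
    template <int N, typename T>
    class SkNf {
    public:
        SkNf() {}
        explicit SkNf(T v) : fLo(v), fHi(v) {}
        SkNf(const SkNf<N/2, T>& lo, const SkNf<N/2, T>& hi) : fLo(lo), fHi(hi) {}

        SkNf operator+(const SkNf& o) const { return SkNf(fLo + o.fLo, fHi + o.fHi); }
        SkNf operator*(const SkNf& o) const { return SkNf(fLo * o.fLo, fHi * o.fHi); }

    private:
        // Recursion: two half-width vectors, bottoming out at N == 1 below.
        SkNf<N/2, T> fLo, fHi;
    };

    // Base case: a 1-wide "vector" is just a scalar.
    template <typename T>
    class SkNf<1, T> {
    public:
        SkNf() {}
        explicit SkNf(T v) : fVal(v) {}
        SkNf operator+(const SkNf& o) const { return SkNf(fVal + o.fVal); }
        SkNf operator*(const SkNf& o) const { return SkNf(fVal * o.fVal); }
    private:
        T fVal;
    };

    // Platform headers can then drop in e.g. template <> class SkNf<4, float>
    // without leaking details into the generic code.
    typedef SkNf<2, float>  Sk2f;
    typedef SkNf<4, float>  Sk4f;
    typedef SkNf<4, double> Sk4d;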
To simplify things, I've stripped away most APIs (swizzles, casts, reinterpret_casts) that no one's using yet. I will happily add them back if they seem useful.
You shouldn't feel bad about using any of the typedefs Sk4s, Sk4f, Sk4d, Sk2s, Sk2f, Sk2d, Sk4i, etc. Here's how you should feel:
- Sk4f, Sk4s, Sk2d: feel awesome
- Sk2f, Sk2s, Sk4d: feel pretty good
No public API changes.
TBR=reed@google.com
BUG=skia:3592
Review URL: https://codereview.chromium.org/1048593002
Need to land SK_SUPPORT_LEGACY_SCALAR_MAPPOINTS in Chrome to suppress the Affine
version, which causes slight differences (these will need to be rebaselined).
BUG=skia:
Review URL: https://codereview.chromium.org/1045493002
Duplicate code from the HWUI backends for DM and nanobench
moves into a single place, saving a hundred lines or more of
cut-and-paste.
There's some indication that this increases the incidence of
SkCanvas "Unable to find device for layer." warnings, but no
clear degradation in test results.
R=djsollen@google.com,mtklein@google.com
BUG=skia:3589
Review URL: https://codereview.chromium.org/1036303002
Add and test trunc(), which is what get() used to be before rounding.
Using trunc() is a ~40% speedup on our linear gradient bench.
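Roughly why trunc() is cheaper (an illustrative NEON sketch, not the actual SkNx_neon.h code): the hardware float-to-int conversion already truncates toward zero, so a rounding get() pays for an extra add first.

    #include <arm_neon.h>

    // Illustrative only: vcvtq_s32_f32 truncates toward zero, so trunc()
    // is a single conversion...
    static inline int32x4_t trunc4(float32x4_t v) {
        return vcvtq_s32_f32(v);
    }

    // ...while rounding (shown here for non-negative values) needs an
    // extra add of 0.5 before converting.
    static inline int32x4_t round4(float32x4_t v) {
        return vcvtq_s32_f32(vaddq_f32(v, vdupq_n_f32(0.5f)));
    }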
#neon #floats
BUG=skia:3592
#n5
#n9
CQ_INCLUDE_TRYBOTS=client.skia.android:Test-Android-Nexus5-Adreno330-Arm7-Debug-Trybot;client.skia.android:Test-Android-Nexus9-TegraK1-Arm64-Release-Trybot
Review URL: https://codereview.chromium.org/1032243002
Am I going nuts or can we get this down to just adds and converts in the loop?
#floats #n9
BUG=skia:3592
CQ_INCLUDE_TRYBOTS=client.skia.android:Test-Android-Nexus9-TegraK1-Arm64-Release-Trybot
Review URL: https://codereview.chromium.org/1008973004
There is no reason to require the 4 SkPMFloats (registers) to be adjacent.
The only potential win in loads and stores comes from the SkPMColors being adjacent.
Makes no difference to existing bench.
BUG=skia:
Review URL: https://codereview.chromium.org/1035583002
RotatedRectBench was asking for its base layer size, which may
not be what it expects with odd canvas modes (particularly proxies).
Most benchmarks are not so sophisticated; they hard-wire their
size and just use that (expected) value.
R=mtklein@google.com,djsollen@google.com
BUG=skia:3566
Review URL: https://codereview.chromium.org/1015013004
Initial experiments did show that the 256 tile size fixed the hd2000 win7
nanobot failures. However, it did not have any effect on other bots, so this
change moves back to the larger tile size on all bots except the hd2000.
BUG=skia:
Review URL: https://codereview.chromium.org/1022083002
Going back to the old nanobench tile size to see if the increase in tile size is what
has been causing the recent nanobench crashes. The crashes seem very nondeterministic
and hard to debug manually.
256x256 is too small a tile to give accurate GPU results, but if this fixes the crashes
we can try some compromise in the middle.
BUG=skia:
Review URL: https://codereview.chromium.org/1022823003