Changes verbose mode to print both the table and the individual sample
values. No need to hold back information in verbose mode.
BUG=skia:
Review URL: https://codereview.chromium.org/1208763003
Adds a nanobench mode that takes samples for a fixed amount of time,
rather than taking a fixed number of samples.
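A minimal sketch of the idea (illustrative only, not nanobench's actual code; the function and names are made up):

  #include <chrono>
  #include <vector>

  // Illustrative: keep taking samples until a time budget is spent,
  // instead of collecting a fixed number of samples.
  template <typename Bench>
  std::vector<double> sample_for(Bench&& bench, std::chrono::milliseconds budget) {
      using clock = std::chrono::steady_clock;
      std::vector<double> samplesMs;
      const auto deadline = clock::now() + budget;
      while (clock::now() < deadline) {
          const auto start = clock::now();
          bench();  // one timed run of the benchmark
          samplesMs.push_back(
              std::chrono::duration<double, std::milli>(clock::now() - start).count());
      }
      return samplesMs;
  }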
BUG=skia:
Review URL: https://codereview.chromium.org/1204153002
Now that Sk4px exists, there's a lot less sense in eking out every
cycle of speed from SkPMFloat: if we need to go _really_ fast, we
should use Sk4px. SkPMFloat's going to be used for things that are
already slow: large-range intermediates, divides, sqrts, etc.
A [0,1] range is easier to work with, and can even be faster if we
eliminate enough *255 and *1/255 steps. This is particularly true
on ARM, where NEON can do the *255 and /255 steps for us while
converting float<->int.
We have lots of experimental SkPMFloat <-> SkPMColor APIs that
I'm now removing. Of the existing APIs, roundClamp() is the sanest,
so I've kept only that, now called round(). The 4-at-a-time APIs
never panned out, so they're gone.
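As a rough illustration of the [0,1] representation and of what round() does (a scalar sketch, not SkPMFloat's actual implementation; names are illustrative):

  #include <algorithm>
  #include <cstdint>

  // Illustrative: store each channel as a float in [0,1]...
  static inline float byte_to_unit(uint8_t c) {
      return c * (1.0f / 255.0f);
  }

  // ...and, like round(), clamp to [0,1] and round back to a byte.
  static inline uint8_t unit_to_byte(float c) {
      c = std::min(std::max(c, 0.0f), 1.0f);
      return static_cast<uint8_t>(c * 255.0f + 0.5f);
  }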
There will be small diffs on:
colormatrix coloremoji colorfilterimagefilter fadefilter imagefilters_xfermodes imagefilterscropexpand imagefiltersgraph tileimagefilter
BUG=skia:
Review URL: https://codereview.chromium.org/1201343004
Improves the accuracy of nanobench's GPU measurements by using fence syncs.
Fence syncs are very widely supported and available on almost every
platform.
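A minimal sketch of how a fence sync can bracket the timed work (assuming a GL context with sync objects, e.g. ES3 or desktop GL 3.2+; illustrative only, not nanobench's actual timing loop):

  #include <GLES3/gl3.h>  // or a desktop GL header that exposes sync objects
  #include <chrono>

  // Illustrative: stop the clock only once the GPU has finished the
  // submitted work, not merely once the GL calls have returned.
  double timeFrameMs(void (*drawFrame)()) {
      const auto start = std::chrono::steady_clock::now();
      drawFrame();  // issue one frame's worth of GL work

      GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
      glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                       GLuint64(10) * 1000 * 1000 * 1000);  // 10s timeout, in ns
      glDeleteSync(fence);

      return std::chrono::duration<double, std::milli>(
          std::chrono::steady_clock::now() - start).count();
  }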
NO_MERGE_BUILDS
BUG=skia:
Review URL: https://codereview.chromium.org/1194783003
I think these changes to the subset benchmarks cover what we discussed yesterday.
I removed the divisor benchmarks (2x2, 3x3) and changed the single subset benchmarks.
Also, we will no longer benchmark subset decodes on small images.
BUG=skia:
Review URL: https://codereview.chromium.org/1188223002
Let's make CPU-bound .SKP benching mimic Chrome's tiles.
Unfortunately, the CPU code also performs a lot better with those big wide tiles...
BUG=skia:
Review URL: https://codereview.chromium.org/1189863002
This makes it easier to benchmark _mpd variants in a profiler.
E.g.,
<profiler> out/Release/nanobench --images --config 8888 --loops -1 --match sp_desk_nytimes
BUG=skia:
Review URL: https://codereview.chromium.org/1184673006
I haven't figured out a pithy way to have these apply only to classes
originating from SkNx, so let's just remove them. There aren't too
many use cases, and it's not really any less readable without them.
Semantically, this is a no-op.
BUG=skia:
Review URL: https://codereview.chromium.org/1167153002
The existing bench only tests the fast path, but we're looking to speed
up the general case. It'd be nice to be able to measure that speedup.
BUG=skia:
Review URL: https://codereview.chromium.org/1146953003
Constructing the gm tests and benches causes many font loads. These
show up as profiling samples in fontconfig and freetype on Linux in
every profiling run of nanobench, which complicates analysis of test
cases suspected of being slow due to font-related issues.
Move the font loading to GM::onOnceBeforeDraw and Benchmark::onPreDraw.
That way the code is not executed if the test case does not match the
nanobench --match filter, and the samples in font-related code are
easier to identify as legitimate occurrences caused by the test case.
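The pattern looks roughly like this (illustrative only; the helper and member names here are hypothetical, not the actual GM code):

  class SomeTextGM : public skiagm::GM {
  protected:
      void onOnceBeforeDraw() override {
          // Previously done in the constructor; now only runs if this GM
          // actually gets drawn (i.e. it passed the --match filter).
          fTypeface = LoadTestTypeface("/fonts/Funkster.ttf");  // hypothetical helper
      }

      void onDraw(SkCanvas* canvas) override {
          // ... draw text with fTypeface ...
      }
      // ... onShortName(), onISize(), etc. ...

  private:
      SkTypeface* fTypeface = nullptr;  // ownership/smart-pointer type elided
  };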
This should not cause differences in timings, because:
* Benchmark::preDraw / onPreDraw is defined to be run outside the timer
* GM::runAsBench is not enabled for any of the modified test cases. Also,
nanobench's untimed warmup round should run onOnceBeforeDraw
(and there are other GM::runAsBench gms that already do loading in
onOnceBeforeDraw).
Changes the behavior:
In TextBench:
Before, the test would report two different gms with the same name if
the color emoji font was not loaded successfully.
After, the test always reports every test under its own name.
Generally:
The errors from loading fonts now print in between test cases, as
opposed to printing during the construction phase. Sample output:
( 143/145 MB 1872) 14.7ms 8888 gm quadclosepathResource /fonts/Funkster.ttf not a valid font.
( 160/160 MB 1831) 575µs 8888 gm surfacenewResource /fonts/Funkster.ttf not a valid font.
( 163/165 MB 1816) 12.5ms 8888 gm linepathResource /fonts/Funkster.ttf not a valid font.
( 263/411 MB 1493) 118ms 8888 gm typefacestyles_kerningResource /fonts/Funkster.ttf not a valid font.
( 374/411 MB 1231) 7.16ms 565 gm getpostextpathResource /fonts/Funkster.ttf not a valid font.
( 323/411 MB 1179) 4.92ms 565 gm stringartResource /fonts/Funkster.ttf not a valid font.
( 347/493 MB 917) 191ms 565 gm patch_gridResource /fonts/Funkster.ttf not a valid font.
( 375/493 MB 857) 23.9ms gpu gm clipdrawdrawCannot render path (0)
( 393/493 MB 706) 2.91ms unit test ParsePath------ png error IEND: CRC error
( 394/493 MB 584) 166ms gpu gm hairmodesResource /fonts/Funkster.ttf not a valid font.
Resource /fonts/Funkster.ttf not a valid font.
Resource /fonts/Funkster.ttf not a valid font.
...
Review URL: https://codereview.chromium.org/1144023002
Make GrResourceCache performance less sensitive to key length change.
The memcmp in GrResourceKey is called when SkTDynamicHash probes
slots to find an entry by index. Avoid most of the memcmps by
comparing the hash first.
This is important because small changes in key data length can cause
big performance regressions. The theory is that a change in key length
produces different hash values, and those hash values can trigger
memcmps that weren't there before, causing the regression.
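The comparison change boils down to something like this (an illustrative sketch, not GrResourceKey's actual layout):

  #include <cstdint>
  #include <cstring>

  struct Key {
      uint32_t        fHash;   // precomputed hash of the key data
      size_t          fSize;   // length of fData in bytes
      const uint32_t* fData;

      bool operator==(const Key& that) const {
          // Reject mismatches with the cheap 32-bit compare first, so the
          // memcmp over the variable-length data only runs on likely matches.
          return fHash == that.fHash &&
                 fSize == that.fSize &&
                 0 == memcmp(fData, that.fData, fSize);
      }
  };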
Adds a few specialized benches to grresourcecache_add to test different
key lengths. The tests are run only in release builds, because in debug
builds the SkTDynamicHash validation takes too long, and adding many such
delays to development test runs would be unproductive. In release builds
the tests are quite fast.
Effect of this patch on the added tests on amd64:
grresourcecache_find_10 738us -> 768us 1.04x
grresourcecache_find_2 472us -> 476us 1.01x
grresourcecache_find_25 841us -> 845us 1x
grresourcecache_find_4 565us -> 531us 0.94x
grresourcecache_find_54 1.18ms -> 1.1ms 0.93x
grresourcecache_find_5 834us -> 749us 0.9x
grresourcecache_find_3 620us -> 542us 0.87x
grresourcecache_add_25 2.74ms -> 2.24ms 0.82x
grresourcecache_add_56 3.23ms -> 2.56ms 0.79x
grresourcecache_add_54 3.34ms -> 2.62ms 0.78x
grresourcecache_add_5 2.68ms -> 2.1ms 0.78x
grresourcecache_add_10 2.7ms -> 2.11ms 0.78x
grresourcecache_add_2 1.85ms -> 1.41ms 0.76x
grresourcecache_add 1.84ms -> 1.4ms 0.76x
grresourcecache_add_4 1.99ms -> 1.49ms 0.75x
grresourcecache_add_3 2.11ms -> 1.55ms 0.73x
grresourcecache_add_55 39ms -> 13.9ms 0.36x
grresourcecache_find_55 23.2ms -> 6.21ms 0.27x
On arm64 the results are similar.
On arm_v7_neon, the results lack the discontinuity at 55:
grresourcecache_add 4.06ms -> 4.26ms 1.05x
grresourcecache_add_2 4.05ms -> 4.23ms 1.05x
grresourcecache_find 1.28ms -> 1.3ms 1.02x
grresourcecache_find_56 3.35ms -> 3.32ms 0.99x
grresourcecache_find_2 1.31ms -> 1.29ms 0.99x
grresourcecache_find_54 3.28ms -> 3.24ms 0.99x
grresourcecache_add_5 6.38ms -> 6.26ms 0.98x
grresourcecache_add_55 8.44ms -> 8.24ms 0.98x
grresourcecache_add_25 7.03ms -> 6.86ms 0.98x
grresourcecache_find_25 2.7ms -> 2.59ms 0.96x
grresourcecache_find_4 1.45ms -> 1.38ms 0.95x
grresourcecache_find_10 2.52ms -> 2.39ms 0.95x
grresourcecache_find_55 3.54ms -> 3.33ms 0.94x
grresourcecache_find_5 2.5ms -> 2.32ms 0.93x
grresourcecache_find_3 1.57ms -> 1.43ms 0.91x
The extremely slow case, 55, is postulated to be due to index-jump
collisions triggering the memcmp. This is not visible on arm_v7_neon,
probably because the hash function produces different results on 32-bit
architectures.
This change is needed for extending the path cache key in the Gr
NV_path_rendering codepath. Extending the key is needed in order to add
dashed paths to the path cache.
Review URL: https://codereview.chromium.org/1132723003
I'm thinking of using this in perf with something like:
ratio(fill(filter("test=foo")), fill(filter("test=control")))
Does that make sense to you?
Not sure that this is really a good control bench on all bots,
but I propose we just run it a bit and find out if it needs work.
BUG=skia:
Review URL: https://codereview.chromium.org/1129823003
The benches for N <= 10 get around 2x faster on my N7 and N9. I believe this
is because of the reduced function-call-then-function-pointer-call overhead on
the N7, and additionally because it seems autovectorization beats our NEON code
for small N on the N9.
My desktop is unchanged, though that's probably because N=10 lies well within a
region where memset's performance is essentially constant: N=100 takes only
about 2x as long as N=1 and N=10, which perform nearly identically.
BUG=skia:
Review URL: https://codereview.chromium.org/1073863002
The color filter is applied to a single color (the paint's), so the bench does
not measure the filter at all, but simply the blit of a solid color.
BUG=skia:
TBR=
Review URL: https://codereview.chromium.org/1055383002