
Measuring the performance of CanvasKit using Puppeteer and Chrome.

Initial setup

Run npm ci to install the dependencies needed to run the tests. In //modules/canvaskit, run make release to build the CanvasKit that will be used. With modifications to the Makefile, other builds (e.g. make profile) can be used as well.
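The setup steps above, sketched as shell commands (paths assume a standard Skia checkout layout; adjust as needed):

```shell
# One-time setup. Run from tools/perf-canvaskit-puppeteer in a Skia checkout.
npm ci

# Build the CanvasKit bundle the benchmarks will load. The relative path
# assumes the standard checkout layout (//modules/canvaskit).
cd ../../modules/canvaskit
make release    # other builds (e.g. `make profile`) need Makefile tweaks
```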

If needed, one can download the lottie-samples and/or skp assets from CIPD using the sk tool:

sk asset download lottie-samples ~/Downloads/lottie-samples
sk asset download skps ~/Downloads/skps

These assets can be downloaded to any location; the Makefile assumes they are in ~/Downloads, but that can be changed locally.

Basic Performance Tests

We have a harness for running benchmarks. Benchmark code snippets can be added to canvas_perf.js. The harness itself consists of canvas_perf.html and benchmark.js; it runs the "test" portion of each snippet over multiple frames and gathers data.

To run the benchmarks, run make perf_js. By default, this uses the most recent release build of CanvasKit done locally. To run only one or a few benchmarks, change the relevant tests.push calls in canvas_perf.js to onlytests.push and then run make perf_js.
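As a sketch, a new entry in canvas_perf.js follows this shape; the field names mirror the existing entries, while the description and drawing code here are illustrative. The harness normally supplies the tests array and the ctx object, so check existing entries for the exact contract:

```javascript
// Normally defined by the harness; declared here so this sketch stands alone.
const tests = [];

tests.push({
  description: 'draw 50 ovals', // hypothetical benchmark name
  setup: function(CanvasKit, ctx) {
    // One-time state, shared with test() via ctx.
    // Assumed: the harness supplies ctx.surface (see existing entries).
    ctx.canvas = ctx.surface.getCanvas();
    ctx.paint = new CanvasKit.Paint();
  },
  test: function(CanvasKit, ctx) {
    // This is the portion measured on every frame.
    for (let i = 0; i < 50; i++) {
      ctx.canvas.drawOval(CanvasKit.LTRBRect(i, i, i + 20, i + 20), ctx.paint);
    }
  },
  teardown: function(CanvasKit, ctx) {
    ctx.paint.delete(); // free WASM-side memory
  },
});
// To run just this benchmark, push onto onlytests instead of tests.
```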

On the CI, the results from these tests are uploaded to Perf. For example: https://perf.skia.org/e/?queries=test%3Dcanvas_drawOval We include metrics such as the 90th, 95th, and 99th percentile frame times, the average and median frame time, and the standard deviation. There are three types of measurements:

- without_flush_ms: the test() function alone.
- with_flush_ms: test() plus the subsequent flush() call.
- total_frame_ms: the frame-to-frame time. This is important to measure because it accounts for any work the GPU still does after CanvasKit flushes.
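For illustration, summary statistics like these can be derived from an array of per-frame times. This is a hypothetical helper, not the harness's actual aggregation code:

```javascript
// Nearest-rank percentile on an ascending-sorted array (illustrative only).
function percentile(sortedMs, p) {
  const idx = Math.ceil((p / 100) * sortedMs.length) - 1;
  return sortedMs[Math.min(sortedMs.length - 1, Math.max(0, idx))];
}

// Compute the kinds of summary metrics described above from raw frame times.
function summarize(frameTimesMs) {
  const sorted = [...frameTimesMs].sort((a, b) => a - b);
  const n = sorted.length;
  const avg = sorted.reduce((s, x) => s + x, 0) / n;
  const median = n % 2 ? sorted[(n - 1) / 2]
                       : (sorted[n / 2 - 1] + sorted[n / 2]) / 2;
  const variance = sorted.reduce((s, x) => s + (x - avg) ** 2, 0) / n;
  return {
    avg,
    median,
    stddev: Math.sqrt(variance),
    p90: percentile(sorted, 90),
    p95: percentile(sorted, 95),
    p99: percentile(sorted, 99),
  };
}
```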

Skottie Frames Performance

There is a harness that gathers data about rendering 600 frames of a skottie animation, cycling through it much as it would be displayed to a user (e.g. as on skottie.skia.org).

To test it locally with a specific skottie animation, feel free to modify the Makefile to adjust the input_lottie argument and then run make frames. The harness itself is skottie-frames.html and benchmark.js.

On the CI, the results from these tests are uploaded to Perf. For example: https://perf.skia.org/e/?queries=test%3Dlego_loader We include metrics such as the first 5 frame times, the average frame time, and the 90th, 95th, and 99th percentile frame times.

SKP Performance

There is a harness that will repeatedly draw an SKP and measure various metrics. This is handled by render-skp.html and benchmark.js. As before, feel free to modify the Makefile (the input_skp argument) and run make skp.

On the CI, the results from these tests are uploaded to Perf. For example: https://perf.skia.org/e/?queries=binary%3DCanvasKit%26test%3Ddesk_chalkboard.skp