ed58e004e0

The client can do a test run of their application with a persistent cache set to SkSL mode. They store the key and data blobs that are produced. Ship those blobs with the application. At startup, call GrContext::precompileShader for each key/data pair. This compiles the shaders, and stores the GL program ID, plus a small amount of metadata, in our runtime program cache.

Caveats:

* Currently only implemented for the GL backend. Other backends will require more metadata to do any useful amount of work. Metal may need a more drastic workflow change, involving offline compilation of the shaders.
* Currently only implemented for cached SkSL (not GLSL or program binaries). Supporting other formats again requires more metadata, and the cached shaders become increasingly specialized to GPU and driver versions.
* Reusing the cached SkSL on different hardware is not supported. Many driver workarounds are implemented in the SkSL -> GLSL transformation, but some are higher level. Limiting device variance by artificially hiding extensions may help, but there are no guarantees.
* The 'gltestprecompile' DM config exercises this code similarly to 'gltestpersistentcache', ensuring that results are visually identical when precompiling, and that no cache misses occur after precompiling.

Change-Id: Id314c5d5f5a58fe503a0505a613bd4a540cc3589
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/239438
Reviewed-by: Greg Daniel <egdaniel@google.com>
Reviewed-by: Brian Salomon <bsalomon@google.com>
Commit-Queue: Brian Osman <brianosman@google.com>
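For concreteness, here is a minimal C++ sketch of that capture-and-precompile workflow. Only GrContext::precompileShader is named in the commit message; the GrContextOptions::PersistentCache interface and the fPersistentCache/fCacheSkSL options are taken from Skia's public headers of roughly that era but may differ by version, and the helper names (ShaderBlobRecorder, MakeCaptureOptions, WarmProgramCache) are purely illustrative.

```cpp
// Minimal sketch, not Skia's shipped code. Helper names are hypothetical;
// only GrContext::precompileShader comes from the change described above.
#include <utility>
#include <vector>

#include "include/core/SkData.h"
#include "include/gpu/GrContext.h"
#include "include/gpu/GrContextOptions.h"

// Records every key/data blob Skia asks us to persist during the test run.
class ShaderBlobRecorder : public GrContextOptions::PersistentCache {
public:
    sk_sp<SkData> load(const SkData& /*key*/) override {
        return nullptr;  // Always miss, so every shader gets stored once.
    }
    void store(const SkData& key, const SkData& data) override {
        fBlobs.emplace_back(SkData::MakeWithCopy(key.data(), key.size()),
                            SkData::MakeWithCopy(data.data(), data.size()));
    }
    std::vector<std::pair<sk_sp<SkData>, sk_sp<SkData>>> fBlobs;
};

// Test run: configure the context for cached SkSL, draw the app's content,
// then serialize recorder->fBlobs to files that ship with the application.
GrContextOptions MakeCaptureOptions(ShaderBlobRecorder* recorder) {
    GrContextOptions options;
    options.fPersistentCache = recorder;
    options.fCacheSkSL = true;  // "SkSL mode"; exact option name varies by Skia version.
    return options;
}

// App startup: feed the shipped blobs back in to warm the runtime program cache.
void WarmProgramCache(GrContext* context,
                      const std::vector<std::pair<sk_sp<SkData>, sk_sp<SkData>>>& blobs) {
    for (const auto& blob : blobs) {
        if (!context->precompileShader(*blob.first, *blob.second)) {
            // The blob came from a different backend, GPU, or Skia revision.
        }
    }
}
```

Returning nullptr from load() in this sketch keeps the test run on the store() path, so each compiled program's key/data blob is captured exactly once for later shipping.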
android_compile.expected
calmbench.expected
check_generated_files.expected
compile.expected
compute_buildstats.expected
compute_test.expected
g3_compile.expected
housekeeper.expected
infra.expected
perf_canvaskit.expected
perf_pathkit.expected
perf_skottietrace.expected
perf_skottiewasm_lottieweb.expected
perf.expected
recreate_skps.expected
skpbench.expected
skqp_test.expected
sync_and_compile.expected
test_canvaskit.expected
test_lottie_web.expected
test_pathkit.expected
test_skqp_emulator.expected
test.expected
upload_buildstats_results.expected
upload_calmbench_results.expected
upload_dm_results.expected
upload_nano_results.expected
upload_skiaserve.expected
android_compile.py
calmbench.py
check_generated_files.py
compile.py
compute_buildstats.py
compute_test.py
g3_compile.py
housekeeper.py
infra.py
perf_canvaskit.py
perf_pathkit.py
perf_skottietrace.py
perf_skottiewasm_lottieweb.py
perf.py
README.md
recreate_skps.py
skpbench.py
skqp_test.py
sync_and_compile.py
test_canvaskit.py
test_lottie_web.py
test_pathkit.py
test_skqp_emulator.py
test.py
upload_buildstats_results.py
upload_calmbench_results.py
upload_dm_results.py
upload_nano_results.py
upload_skiaserve.py
Skia Recipes
These are the top-level scripts which run inside of Swarming tasks to perform all of Skia's automated testing.
To run a recipe locally:
$ python infra/bots/recipes.py run --workdir=/tmp/<workdir> <recipe name without .py> key1=value1 key2=value2 ...
Each recipe may have its own required properties which must be entered as key/value pairs in the command.
When you change a recipe, you generally need to re-train the simulation test:
$ python infra/bots/recipes.py test train
Or:
$ cd infra/bots; make train
The test generates expectation files for the tests contained within each recipe, which illustrate which steps would run given a particular set of inputs. Pay attention to the diffs in these files when making changes to ensure that your change has the intended effect.
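To illustrate where those expectation files come from, here is a hedged sketch of a hypothetical recipe in the recipe-engine style these scripts use. The recipe name, the 'task' property, and the test names are made up for illustration and do not correspond to any file listed above.

```python
# hello_world.py -- hypothetical recipe, for illustration only.

DEPS = [
    'recipe_engine/properties',
    'recipe_engine/step',
]


def RunSteps(api):
    # The steps executed here are what get recorded in the expectation JSON.
    task = api.properties.get('task', 'default')
    api.step('run %s' % task, ['echo', task])


def GenTests(api):
    # `recipes.py test train` simulates each of these inputs and writes one
    # expectation file per test under hello_world.expected/.
    yield api.test('default')
    yield (
        api.test('custom_task') +
        api.properties(task='compile')
    )
```

Re-training after editing RunSteps rewrites those JSON files; reviewing their diffs is how you confirm that only the steps you intended to change actually changed.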