Don't use SkSL node pools for runtime effects

Particularly with runtime FPs, this is a poor space/time tradeoff.
This doesn't completely undo the memory losses from the linked bugs.
Some of it is the unavoidable cost of initializing the runtime effect
compiler - but that's going to happen once, sooner or later. It does
reduce the heap impact of each effect from ~64k to something like 2-3k,
depending on the effect.

Bug: chromium:1223995 chromium:1223996
Change-Id: I929b7a94a88119a155ca403793bebd003a2deca6
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/422518
Reviewed-by: Brian Salomon <bsalomon@google.com>
Reviewed-by: Ethan Nicholas <ethannicholas@google.com>
This commit is contained in:
Brian Osman 2021-06-28 13:37:34 -04:00
parent 661abd0f8d
commit b7af4875cb

@@ -78,6 +78,14 @@ private:
// Don't inline if it would require a do loop, some devices don't support them.
fCaps->fCanUseDoLoops = false;
// SkSL created by the GPU backend is typically parsed, converted to a backend format,
// and the IR is immediately discarded. In that situation, it makes sense to use node
// pools to accelerate the IR allocations. Here, SkRuntimeEffect instances are often
// long-lived (especially those created internally for runtime FPs). In this situation,
// we're willing to pay for a slightly longer compile so that we don't waste huge
// amounts of memory.
fCaps->fUseNodePools = false;
fCompiler = new SkSL::Compiler(fCaps.get());
}