Current strategy: everything from the top
Things to look at first are the manual changes:
- added tools/rewrite_includes.py
- removed -I directives from BUILD.gn
- various compile.sh simplifications
- tweak tools/embed_resources.py
- update gn/find_headers.py to write paths from the top
- update gn/gn_to_bp.py's SkUserConfig.h layout
so that #include "include/config/SkUserConfig.h" always
gets the header we want.
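Roughly what the rewrite amounts to at an include site (the SkTypes.h lines
here are just an illustration, not taken from the patch):

    // Before: headers were found through a pile of -I directives,
    // e.g. -Iinclude/core -Iinclude/config.
    #include "SkTypes.h"
    #include "SkUserConfig.h"

    // After: every include is written relative to the top of the tree,
    // so only the repo root needs to be on the include path.
    #include "include/core/SkTypes.h"
    #include "include/config/SkUserConfig.h"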
No-Presubmit: true
Change-Id: I73a4b181654e0e38d229bc456c0d0854bae3363e
Reviewed-on: https://skia-review.googlesource.com/c/skia/+/209706
Commit-Queue: Mike Klein <mtklein@google.com>
Reviewed-by: Hal Canary <halcanary@google.com>
Reviewed-by: Brian Osman <brianosman@google.com>
Reviewed-by: Florin Malita <fmalita@chromium.org>
The API and implementation are very much simplified.
You may not want to bother reading the diff.
As is our trend, SkOnce now uses <atomic> directly.
Member initialization means we don't need SK_DECLARE_STATIC_ONCE.
SkSpinlock already works this same way.
All uses of the old API taking an external bool* and Lock* were pessimal,
so I have not carried this sort of API forward. It's simpler, faster,
and more space-efficient to always use this single SkOnce class interface.
SkOnce weighs 2 bytes: a done bool and an SkSpinlock, which is itself just a bool internally.
This API refactoring opens up the opportunity to fuse those into a single
three-state byte if we'd like.
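For reference, a minimal sketch of that 2-byte shape, using a second
std::atomic<bool> to stand in for SkSpinlock. This is not the Skia code,
just the done-flag-plus-spinlock pattern it describes:

    #include <atomic>

    class OnceSketch {
    public:
        template <typename Fn>
        void operator()(Fn&& fn) {
            if (fDone.load(std::memory_order_acquire)) {
                return;                                   // fast path: already ran
            }
            // Slow path: serialize racing callers on the 1-byte spinlock.
            while (fLock.exchange(true, std::memory_order_acquire)) { /*spin*/ }
            if (!fDone.load(std::memory_order_acquire)) {
                fn();
                fDone.store(true, std::memory_order_release);
            }
            fLock.store(false, std::memory_order_release);
        }
    private:
        std::atomic<bool> fDone{false};   // 1 byte
        std::atomic<bool> fLock{false};   // 1 byte, playing the role of SkSpinlock
    };

Member initialization is what removes the need for SK_DECLARE_STATIC_ONCE:
a plain 'static SkOnce once;' is already in its initial state.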
No public API changes.
TBR=reed@google.com
BUG=skia:
GOLD_TRYBOT_URL= https://gold.skia.org/search2?unt=true&query=source_type%3Dgm&master=false&issue=1894893002
Review URL: https://codereview.chromium.org/1894893002
This should be a drop-in replacement for most for-loops to make them run in parallel:
for (int i = 0; i < N; i++) { code... }
~~~>
sk_parallel_for(N, [&](int i) { code... });
This is just syntax sugar over SkTaskGroup to make this use case really easy to write.
It adds no overhead beyond what an interface like batch() already forced on us,
and no extra heap allocations.
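A made-up example of the shape of the change (the names are illustrative, and
it assumes, as "drop-in replacement" implies, that sk_parallel_for returns only
once every iteration has finished):

    // Before: a serial loop over N independent pieces of work.
    for (int i = 0; i < fPictures.count(); i++) {
        drawPicture(fPictures[i]);
    }

    // After: the body becomes a lambda taking the index, and the N calls
    // are fanned out over the shared thread pool.
    sk_parallel_for(fPictures.count(), [&](int i) {
        drawPicture(fPictures[i]);
    });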
I've replaced 3 uses of SkTaskGroup with sk_parallel_for:
1) My unit tests for SkOnce.
2) Cary's path fuzzer.
3) SkMultiPictureDraw.
Performance should be the same. Please compare left and right for readability. :)
BUG=skia:
No public API changes.
TBR=reed@google.com
Review URL: https://codereview.chromium.org/1184373003
This fixes every case where virtual and SK_OVERRIDE were on the same line,
which should be the bulk of cases. We'll have to manually clean up the rest
over time unless I level up in regexes.
for f in (find . -type f); perl -p -i -e 's/virtual (.*)SK_OVERRIDE/\1SK_OVERRIDE/g' $f; end
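Illustrative before/after of what that one-liner rewrites (made-up class;
assumes a C++11 toolchain where SK_OVERRIDE expands to the override keyword):

    #define SK_OVERRIDE override

    struct Base {
        virtual ~Base() {}
        virtual void draw();
    };

    struct Example : Base {
        // Before:  virtual void draw() SK_OVERRIDE;
        // After (override only applies to functions overriding a virtual base
        // method, so the overrider is implicitly virtual and the explicit
        // keyword is redundant):
        void draw() SK_OVERRIDE;
    };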
BUG=skia:
Review URL: https://codereview.chromium.org/806653007
SkTaskGroup is like SkThreadPool except the threads stay in
one global pool. Each SkTaskGroup itself is tiny (4 bytes)
and its wait() method applies only to tasks add()ed to that
instance, not the whole thread pool.
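A sketch of the idea, not the actual implementation: each group is essentially
just a counter of its own pending tasks (hence the tiny size), while the worker
threads live in one shared global pool:

    #include <atomic>

    struct TaskGroupSketch {
        std::atomic<int> fPending{0};   // ~4 bytes: the whole per-group state

        // Bumped when a task is handed to the shared pool on this group's
        // behalf; a worker decrements it when that task finishes.
        void onAdd()  { fPending.fetch_add(1, std::memory_order_relaxed); }
        void onDone() { fPending.fetch_sub(1, std::memory_order_acq_rel); }

        // wait() watches only this group's counter, so it returns as soon as
        // our own tasks are done, no matter what else is queued on the pool.
        void wait() const {
            while (fPending.load(std::memory_order_acquire) > 0) {
                // A real implementation might run queued work here
                // instead of just spinning.
            }
        }
    };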
This means we don't need to bring up new thread pools when
tests themselves want to use multithreading (e.g. pathops,
quilt). We just create a new SkTaskGroup and wait for that
to complete. This should be more efficient, and it lets
us expand our use of threads into really latency-sensitive
places. E.g. we can probably now use these in nanobench
for CPU .skp rendering.
Now that all threads are sharing the same pool, I think we
can remove most of the custom mechanism pathops tests use
to control threading. They'll just ride on the global pool
with all other tests now.
This (temporarily?) removes the GPU multithreading feature
from DM, which we don't use.
On my desktop, DM runs a little faster (57s -> 55s) in
Debug, and a lot faster in Release (36s -> 24s). The bots
show speedups of similar proportions, cutting more than a
minute off the N4/Release and Win7/Debug runtimes.
BUG=skia:
Committed: https://skia.googlesource.com/skia/+/9c7207b5dc71dc5a96a2eb107d401133333d5b6f
R=caryclark@google.com, bsalomon@google.com, bungeman@google.com, mtklein@google.com, reed@google.com
Author: mtklein@chromium.org
Review URL: https://codereview.chromium.org/531653002
Reason for revert:
Leaks, leaks, leaks.
Original issue's description:
> SkThreadPool ~~> SkTaskGroup
>
> SkTaskGroup is like SkThreadPool except the threads stay in
> one global pool. Each SkTaskGroup itself is tiny (4 bytes)
> and its wait() method applies only to tasks add()ed to that
> instance, not the whole thread pool.
>
> This means we don't need to bring up new thread pools when
> tests themselves want to use multithreading (e.g. pathops,
> quilt). We just create a new SkTaskGroup and wait for that
> to complete. This should be more efficient, and allow us
> to expand where we use threads to really latency sensitive
> places. E.g. we can probably now use these in nanobench
> for CPU .skp rendering.
>
> Now that all threads are sharing the same pool, I think we
> can remove most of the custom mechanism pathops tests use
> to control threading. They'll just ride on the global pool
> with all other tests now.
>
> This (temporarily?) removes the GPU multithreading feature
> from DM, which we don't use.
>
> On my desktop, DM runs a little faster (57s -> 55s) in
> Debug, and a lot faster in Release (36s -> 24s). The bots
> show speedups of similar proportions, cutting more than a
> minute off the N4/Release and Win7/Debug runtimes.
>
> BUG=skia:
>
> Committed: https://skia.googlesource.com/skia/+/9c7207b5dc71dc5a96a2eb107d401133333d5b6f
> R=caryclark@google.com, bsalomon@google.com, bungeman@google.com, reed@google.com, mtklein@chromium.org
TBR=bsalomon@google.com, bungeman@google.com, caryclark@google.com, mtklein@chromium.org, reed@google.com
NOTREECHECKS=true
NOTRY=true
BUG=skia:
Author: mtklein@google.com
Review URL: https://codereview.chromium.org/533393002
SkTaskGroup is like SkThreadPool except the threads stay in
one global pool. Each SkTaskGroup itself is tiny (4 bytes)
and its wait() method applies only to tasks add()ed to that
instance, not the whole thread pool.
This means we don't need to bring up new thread pools when
tests themselves want to use multithreading (e.g. pathops,
quilt). We just create a new SkTaskGroup and wait for that
to complete. This should be more efficient, and it lets
us expand our use of threads into really latency-sensitive
places. E.g. we can probably now use these in nanobench
for CPU .skp rendering.
Now that all threads are sharing the same pool, I think we
can remove most of the custom mechanism pathops tests use
to control threading. They'll just ride on the global pool
with all other tests now.
This (temporarily?) removes the GPU multithreading feature
from DM, which we don't use.
On my desktop, DM runs a little faster (57s -> 55s) in
Debug, and a lot faster in Release (36s -> 24s). The bots
show speedups of similar proportions, cutting more than a
minute off the N4/Release and Win7/Debug runtimes.
BUG=skia:
R=caryclark@google.com, bsalomon@google.com, bungeman@google.com, mtklein@google.com, reed@google.com
Author: mtklein@chromium.org
Review URL: https://codereview.chromium.org/531653002
1) Remove the atExit feature: clients can do it just as well as SkOnce can.
2) Remove support for functors: no one but the unit test used them.
3) Remove support for the unused non-static SkOnceFlag (no SK_ONCE_INIT).
4) Add SkOnce variants for no-arg functions so we're not forced to pass dummy
   values all the time (toy call-site sketch below).
5) Merge SkSpinlock and SkOnceFlag, making all members private.
6) More notes about memory barriers, adding an acquire load after acquiring the spinlock.
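A toy sketch of the call-site difference in item 4. These are NOT Skia's types
or signatures, just stand-ins to make the two shapes concrete (blocking and
memory-ordering details are deliberately ignored here):

    #include <atomic>

    struct ToyFlag { std::atomic<bool> started{false}; };

    template <typename Fn, typename Arg>
    void toy_once(ToyFlag* f, Fn fn, Arg arg) {   // old shape: always takes an arg
        if (!f->started.exchange(true)) { fn(arg); }
    }
    template <typename Fn>
    void toy_once(ToyFlag* f, Fn fn) {            // new shape: plain no-arg functions
        if (!f->started.exchange(true)) { fn(); }
    }

    static ToyFlag gInit;
    static void init_caches()          { /* one-time setup */ }
    static void init_caches_dummy(int) { init_caches(); }

    void call_sites() {
        toy_once(&gInit, init_caches_dummy, 0);   // before: dummy 0 just fills the slot
        toy_once(&gInit, init_caches);            // after: the natural spelling
    }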
BUG=skia:
R=bungeman@google.com, mtklein@google.com, reed@google.com
Author: mtklein@chromium.org
Review URL: https://codereview.chromium.org/302083003