This has the nice property that hashes can be double-checked after the fact:
mtklein@mtklein ~/skia (hash-png)> md5sum bad/8888/3x3bitmaprect.png
deede70ab2f34067d461fb4a93332d4c bad/8888/3x3bitmaprect.png
mtklein@mtklein ~/skia (hash-png)> grep 3x3bitmaprect_8888 bad/dm.json
"3x3bitmaprect_8888" : "deede70ab2f34067d461fb4a93332d4c",
I have checked that no two premultiplied colors map to the same unpremultiplied
color (math nerds: unpremultiplication is injective), so a change in the
premultiplied SkBitmap will always imply a change in the encoded
unpremultiplied .png. This means it's safe to hash .pngs; we won't miss
subtle changes.
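To make the injectivity claim concrete, here is a minimal brute-force sketch, assuming per-channel unpremultiplication is computed as round(p * 255 / a) for alpha a; Skia's actual SkUnPreMultiply code may be formulated differently (e.g. table-based), so this only illustrates the claim, it is not the production code.

#include <cstdint>
#include <cstdio>

// Hypothetical per-channel unpremultiply: round(p * 255 / a), assuming p <= a.
static uint8_t unpremul(int p, int a) {
    return static_cast<uint8_t>((p * 255 + a / 2) / a);
}

int main() {
    for (int a = 1; a <= 255; a++) {
        bool seen[256] = { false };
        for (int p = 0; p <= a; p++) {      // valid premultiplied channels have p <= a
            uint8_t u = unpremul(p, a);
            if (seen[u]) {
                printf("collision: a=%d p=%d\n", a, p);
                return 1;
            }
            seen[u] = true;                 // each premul value maps to a distinct unpremul value
        }
    }
    printf("unpremultiplication is injective for every alpha\n");
    return 0;
}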
BUG=skia:
R=jcgregorio@google.com, stephana@google.com, mtklein@google.com
Author: mtklein@chromium.org
Review URL: https://codereview.chromium.org/549203003
The two bugs are/were:
- The old loop to draw the hoisted layers included the saveLayer call, which caused double application of the layer's paint (this is the +1 change).
- The hoisted layer is intended to be drawn in device coordinates. The old code was drawing it in the coordinate space of the saveLayer, so it was misplaced (this is the setMatrix change; see the sketch after this list).
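A rough sketch of the intended replay semantics after both fixes, using only public SkCanvas calls; this is not the actual layer-hoisting code, and cachedLayer / layerPaint / deviceLeft / deviceTop are hypothetical stand-ins for the cached layer contents, the saveLayer's paint, and the layer's device-space position.

#include "SkBitmap.h"
#include "SkCanvas.h"
#include "SkMatrix.h"
#include "SkPaint.h"

static void replay_hoisted_layer(SkCanvas* canvas,
                                 const SkBitmap& cachedLayer,
                                 const SkPaint& layerPaint,
                                 int deviceLeft, int deviceTop) {
    canvas->save();
    canvas->setMatrix(SkMatrix::I());      // draw in device space, not the saveLayer's CTM
    canvas->drawBitmap(cachedLayer,
                       SkIntToScalar(deviceLeft),
                       SkIntToScalar(deviceTop),
                       &layerPaint);       // the layer's paint is applied exactly once, here
    canvas->restore();                     // no saveLayer is re-issued during replay
}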
R=bsalomon@google.com
Author: robertphillips@google.com
Review URL: https://codereview.chromium.org/551843002
DM's striking off into its own JSON world. This gets strawman implementations
in place for writing and reading a JSON file mapping test names to hashes.
For what it's worth, I basically want to change _all_ of these pieces:
- MD5 is slow and we can replace it with something faster,
- JSON schema needs room to grow more data,
- it'd be nice to hash once instead of twice when reading and writing,
- this code wants lots of refactoring,
but this gives us a starting platform to work on these bits at our leisure.
Example file for now:
mtklein@mtklein ~/skia (dm)> cat good/dm.json
{
"3x3bitmaprect_565" : "fc70d985fbfbe70e3a3c9dc626d4f5bc",
"3x3bitmaprect_8888" : "df1591dde35907399734ea19feb76663",
"3x3bitmaprect_gpu" : "df1591dde35907399734ea19feb76663",
"aaclip_565" : "1862798689b838a7ab0dc0652b9ace3a",
"aaclip_8888" : "47bb314329f0ce243f1d83fd583decb7",
"aaclip_gpu" : "75f72412d0ef4815770202d297246e7d",
...
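For reading such a file back into a name -> hash map, here's a minimal sketch assuming a jsoncpp-style parser (Json::Value); it is illustrative only and not necessarily what the strawman implementation in DM does.

#include <fstream>
#include <map>
#include <string>
#include <json/json.h>

// Parse a dm.json-style file into test name -> MD5 hash.
static std::map<std::string, std::string> read_hashes(const char* path) {
    std::map<std::string, std::string> hashes;
    std::ifstream in(path);
    Json::Value root;
    Json::Reader reader;
    if (reader.parse(in, root)) {
        for (const std::string& name : root.getMemberNames()) {
            hashes[name] = root[name].asString();
        }
    }
    return hashes;
}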
BUG=skia:
R=jcgregorio@google.com, stephana@google.com, mtklein@google.com
Author: mtklein@chromium.org
Review URL: https://codereview.chromium.org/546873002
SkTaskGroup is like SkThreadPool except the threads stay in
one global pool. Each SkTaskGroup itself is tiny (4 bytes)
and its wait() method applies only to tasks add()ed to that
instance, not the whole thread pool.
This means we don't need to bring up new thread pools when
tests themselves want to use multithreading (e.g. pathops,
quilt). We just create a new SkTaskGroup and wait for that
to complete. This should be more efficient, and allow us
to expand where we use threads to really latency-sensitive
places. E.g. we can probably now use these in nanobench
for CPU .skp rendering.
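A minimal usage sketch of that pattern; wait() and add() are named above, but the exact add() overload shown here (taking an SkRunnable*) is an assumption carried over from SkThreadPool's interface, not something stated in this description.

#include "SkRunnable.h"
#include "SkTaskGroup.h"

// Hypothetical task type: e.g. render one tile of a .skp.
struct TileTask : public SkRunnable {
    virtual void run() { /* do one tile's worth of work */ }
};

static void render_tiles(TileTask* tasks, int count) {
    SkTaskGroup group;             // tiny (4 bytes); threads live in the one global pool
    for (int i = 0; i < count; i++) {
        group.add(&tasks[i]);      // queued on the global pool, tracked by this group
    }
    group.wait();                  // blocks only until these add()ed tasks finish,
                                   // not until the whole pool drains
}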
Now that all threads are sharing the same pool, I think we
can remove most of the custom mechanism pathops tests use
to control threading. They'll just ride on the global pool
with all other tests now.
This (temporarily?) removes the GPU multithreading feature
from DM, which we don't use.
On my desktop, DM runs a little faster (57s -> 55s) in
Debug, and a lot faster in Release (36s -> 24s). The bots
show speedups of similar proportions, cutting more than a
minute off the N4/Release and Win7/Debug runtimes.
BUG=skia:
Committed: https://skia.googlesource.com/skia/+/9c7207b5dc71dc5a96a2eb107d401133333d5b6f
R=caryclark@google.com, bsalomon@google.com, bungeman@google.com, mtklein@google.com, reed@google.com
Author: mtklein@chromium.org
Review URL: https://codereview.chromium.org/531653002
Reason for revert:
Leaks, leaks, leaks.
Original issue's description:
> SkThreadPool ~~> SkTaskGroup
>
> SkTaskGroup is like SkThreadPool except the threads stay in
> one global pool. Each SkTaskGroup itself is tiny (4 bytes)
> and its wait() method applies only to tasks add()ed to that
> instance, not the whole thread pool.
>
> This means we don't need to bring up new thread pools when
> tests themselves want to use multithreading (e.g. pathops,
> quilt). We just create a new SkTaskGroup and wait for that
> to complete. This should be more efficient, and allow us
> to expand where we use threads to really latency sensitive
> places. E.g. we can probably now use these in nanobench
> for CPU .skp rendering.
>
> Now that all threads are sharing the same pool, I think we
> can remove most of the custom mechanism pathops tests use
> to control threading. They'll just ride on the global pool
> with all other tests now.
>
> This (temporarily?) removes the GPU multithreading feature
> from DM, which we don't use.
>
> On my desktop, DM runs a little faster (57s -> 55s) in
> Debug, and a lot faster in Release (36s -> 24s). The bots
> show speedups of similar proportions, cutting more than a
> minute off the N4/Release and Win7/Debug runtimes.
>
> BUG=skia:
>
> Committed: https://skia.googlesource.com/skia/+/9c7207b5dc71dc5a96a2eb107d401133333d5b6f
R=caryclark@google.com, bsalomon@google.com, bungeman@google.com, reed@google.com, mtklein@chromium.org
TBR=bsalomon@google.com, bungeman@google.com, caryclark@google.com, mtklein@chromium.org, reed@google.com
NOTREECHECKS=true
NOTRY=true
BUG=skia:
Author: mtklein@google.com
Review URL: https://codereview.chromium.org/533393002
SkTaskGroup is like SkThreadPool except the threads stay in
one global pool. Each SkTaskGroup itself is tiny (4 bytes)
and its wait() method applies only to tasks add()ed to that
instance, not the whole thread pool.
This means we don't need to bring up new thread pools when
tests themselves want to use multithreading (e.g. pathops,
quilt). We just create a new SkTaskGroup and wait for that
to complete. This should be more efficient, and allow us
to expand where we use threads to really latency-sensitive
places. E.g. we can probably now use these in nanobench
for CPU .skp rendering.
Now that all threads are sharing the same pool, I think we
can remove most of the custom mechanism pathops tests use
to control threading. They'll just ride on the global pool
with all other tests now.
This (temporarily?) removes the GPU multithreading feature
from DM, which we don't use.
On my desktop, DM runs a little faster (57s -> 55s) in
Debug, and a lot faster in Release (36s -> 24s). The bots
show speedups of similar proportions, cutting more than a
minute off the N4/Release and Win7/Debug runtimes.
BUG=skia:
R=caryclark@google.com, bsalomon@google.com, bungeman@google.com, mtklein@google.com, reed@google.com
Author: mtklein@chromium.org
Review URL: https://codereview.chromium.org/531653002