skia2/dm/DMTaskRunner.cpp
commit-bot@chromium.org 3f032156c8 DM: Push GPU-parent child tasks to the front of the queue.
Like yesterday's change to run CPU-parent child tasks serially in thread, this
reduces peak memory usage by improving the temporal locality of the bitmaps we
create.

E.g., let's say we start with tasks A, B, C, and D:
    Queue: [ A B C D ]
Running A creates A' and A", which depend on a bitmap created by A.
    Queue: [ B C D A' A" * ]
That bitmap now needs to sit around in RAM while B, C, and D run pointlessly, and
it can only be destroyed at *.  If instead we push dependent child tasks to the
front of the queue, the queue and bitmap lifetime look like this:
    Queue: [ A' A" * B C D ]

In practice this is much, much worse because the queue is often several thousand
tasks long: hundreds of megabytes of bitmaps can pile up for tens of seconds, pointlessly.
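
To make this concrete, here is a minimal standalone sketch (not Skia code: the
Bitmap and Task types, and the std::deque standing in for the pool's queue, are
invented for illustration) showing how long the parent's bitmap stays alive
under each ordering:

    #include <cstdio>
    #include <deque>
    #include <functional>
    #include <memory>

    struct Bitmap { ~Bitmap() { std::printf("bitmap freed\n"); } };
    using Task = std::function<void()>;

    int main() {
        std::deque<Task> queue;

        // Parent task A has produced a bitmap; only its children still need it.
        auto bitmap = std::make_shared<Bitmap>();
        Task childA1 = [bitmap] { std::printf("A' uses bitmap\n"); };
        Task childA2 = [bitmap] { std::printf("A\" uses bitmap\n"); };
        bitmap.reset();

        // Unrelated work (B, C, D) already queued.
        for (const char* name : {"B", "C", "D"}) {
            queue.push_back([name] { std::printf("%s runs\n", name); });
        }

        // Old behavior: queue.push_back(childA1); queue.push_back(childA2);
        // would keep the bitmap alive until after B, C, and D finish.

        // New behavior (what addNext() models): the children jump the line,
        // so the bitmap is freed before B even starts.
        queue.push_front(std::move(childA2));
        queue.push_front(std::move(childA1));

        while (!queue.empty()) {
            Task task = std::move(queue.front());
            queue.pop_front();
            task();  // the task (and its bitmap reference) dies at the end of this iteration
        }
        return 0;
    }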

To make this work we add addNext() to SkThreadPool and its cousin DMTaskRunner.
I also took the opportunity to swap head and tail in the threadpool
implementation so it matches the comments and intuition better: we always pop
the head, add() puts it at the tail, addNext() at the head.
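
As a rough sketch of just that queue discipline (a simplification using
std::deque and a mutex; the real SkThreadPool also owns the worker threads,
condition variable, and shutdown logic):

    #include <deque>
    #include <mutex>

    // Simplified sketch of the head/tail behavior described above.
    template <typename T>
    class WorkQueueSketch {
    public:
        // add(): ordinary FIFO submission, new work goes to the tail.
        void add(T* task) {
            std::lock_guard<std::mutex> lock(fMutex);
            fQueue.push_back(task);
        }

        // addNext(): dependent child work cuts to the head, so it runs while
        // its parent's bitmap is still needed and lets that bitmap die sooner.
        void addNext(T* task) {
            std::lock_guard<std::mutex> lock(fMutex);
            fQueue.push_front(task);
        }

        // Workers always pop from the head.
        T* takeNext() {
            std::lock_guard<std::mutex> lock(fMutex);
            if (fQueue.empty()) {
                return nullptr;
            }
            T* task = fQueue.front();
            fQueue.pop_front();
            return task;
        }

    private:
        std::mutex     fMutex;
        std::deque<T*> fQueue;
    };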


Before
  Debug:   49s, 1403352k peak
  Release: 16s, 2064008k peak

After
  Debug:   49s, 1234788k peak
  Release: 15s, 1903424k peak

BUG=skia:2478
R=bsalomon@google.com, borenet@google.com, mtklein@google.com

Author: mtklein@chromium.org

Review URL: https://codereview.chromium.org/263803003

git-svn-id: http://skia.googlecode.com/svn/trunk@14506 2bbb7eff-a529-9590-31e7-b0007b416f81
2014-05-01 17:41:32 +00:00


#include "DMTaskRunner.h"
#include "DMTask.h"
namespace DM {
TaskRunner::TaskRunner(int cpuThreads, int gpuThreads) : fCpu(cpuThreads), fGpu(gpuThreads) {}
void TaskRunner::add(CpuTask* task) { fCpu.add(task); }
void TaskRunner::addNext(CpuTask* task) { fCpu.addNext(task); }
void TaskRunner::add(GpuTask* task) { fGpu.add(task); }
void TaskRunner::wait() {
// These wait calls block until each threadpool is done. We don't allow
// spawning new child GPU tasks, so we can wait for that first knowing
// we'll never try to add to it later. Same can't be said of the CPU pool:
// both CPU and GPU tasks can spawn off new CPU work, so we wait for that last.
fGpu.wait();
fCpu.wait();
}
} // namespace DM