Pipeline parallelism organizes a parallel program as a linear sequence
of $s$ stages. Each stage processes elements of a data stream, passing
each processed data element to the next stage, and then taking on a
new element before the subsequent stages have necessarily completed
their processing. Pipeline parallelism is used especially in
streaming applications that perform video, audio, and digital signal
processing. Three out of 13 benchmarks in PARSEC, a popular software
benchmark suite designed for shared-memory multiprocessors, can be
expressed using pipeline parallelism.
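
As a concrete illustration of this pattern, here is a minimal sketch in
plain C++ (not Cilk-P code): two stages connected by a shared queue, in
which stage 0 hands each processed element to stage 1 and immediately
takes on the next element. The names stage0/stage1 and the per-element
"work" are placeholders.

    // Minimal two-stage pipeline sketch: stage 0 transforms each element of a
    // stream and passes it to stage 1 through a shared queue, moving on to the
    // next element without waiting for stage 1 to finish.
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <optional>
    #include <queue>
    #include <thread>
    #include <vector>

    std::queue<std::optional<int>> q;  // stage-0 -> stage-1 channel; nullopt marks end of stream
    std::mutex m;
    std::condition_variable cv;

    void stage0(const std::vector<int>& stream) {
        for (int x : stream) {
            int y = x * x;  // stage 0: per-element work (placeholder)
            { std::lock_guard<std::mutex> lk(m); q.push(y); }
            cv.notify_one();  // pass the element downstream, then take on the next one
        }
        { std::lock_guard<std::mutex> lk(m); q.push(std::nullopt); }
        cv.notify_one();
    }

    void stage1() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !q.empty(); });
            std::optional<int> item = q.front();
            q.pop();
            lk.unlock();
            if (!item) break;            // end-of-stream marker
            std::printf("%d\n", *item);  // stage 1: per-element work (placeholder)
        }
    }

    int main() {
        std::vector<int> stream{1, 2, 3, 4, 5};
        std::thread t1(stage1);  // stage 1 runs concurrently with stage 0
        stage0(stream);          // stage 0 runs on the main thread
        t1.join();
    }

Note that this sketch hard-wires a fixed two-stage structure in advance;
the on-the-fly model discussed next lets the stage structure emerge
during execution instead.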

Whereas most concurrency platforms that support pipeline parallelism
use a "construct-and-run" approach, this paper investigates
"on-the-fly" pipeline parallelism, where the structure of the pipeline
emerges as the program executes rather than being specified a priori.
On-the-fly pipeline parallelism allows the number of stages to vary
from iteration to iteration and dependencies to be data dependent. We
propose simple linguistics for specifying on-the-fly pipeline
parallelism and describe a provably efficient scheduling algorithm,
the PIPER algorithm, which integrates pipeline parallelism into a
work-stealing scheduler, allowing pipeline and fork-join parallelism
to be arbitrarily nested. The PIPER algorithm automatically throttles
the parallelism, precluding "runaway" pipelines. Given a pipeline
computation with $T_1$ work and $T_\infty$ span (critical-path
length), PIPER executes the computation on $P$ processors in $T_P \le
T_1/P + O(T_\infty + \lg P)$ expected time. PIPER also limits stack
space, ensuring that it does not grow unboundedly with running time.
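
To read this bound (a standard interpretation of such work-stealing
guarantees, stated here only for intuition), write $c$ for the constant
hidden in the $O$-notation. The expected speedup then satisfies
$$\frac{T_1}{T_P} \;\ge\; \frac{T_1}{T_1/P + c\,(T_\infty + \lg P)} \;=\; \frac{P}{1 + c\,P\,(T_\infty + \lg P)/T_1},$$
so whenever the parallelism $T_1/T_\infty$ greatly exceeds $P$ (and
$T_1 \gg P \lg P$), the denominator approaches 1 and the speedup
approaches $P$, that is, near-linear speedup on $P$ processors.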

We have incorporated on-the-fly pipeline parallelism into a Cilk-based
work-stealing runtime system. Our prototype Cilk-P implementation
exploits optimizations such as lazy enabling and dependency folding.
We have ported the three PARSEC benchmarks that exhibit pipeline
parallelism to run on Cilk-P. One of these, x264, cannot readily be
executed by systems that support only construct-and-run pipeline
parallelism. Benchmark results indicate that Cilk-P has low serial
overhead and good scalability. On x264, for example, Cilk-P exhibits
a speedup of 13.87 over its serial counterpart when running
on 16 processors.
