Krzysztof Onak (IBM TJ Watson Research Center)
Round Compression for Parallel Matching Algorithms
Abstract: The last decade has witnessed the success of massively
parallel systems such as MapReduce, Hadoop, Dryad, and Spark. Compared
to the classic models of parallel and distributed computation, these
systems allow for more local computation on a larger amount of local
data in each parallel round. The fundamental question that arises in
this context is: can we leverage this additional power to obtain
algorithms that use fewer parallel rounds?
We study this question for one of the most classic graph problems:
maximum matching. It is known that if the amount of memory per
machine is at least n^(1+Omega(1)), where n is the number of vertices,
then a constant number of rounds suffices to compute a 2-approximation
via a maximal matching. If the memory is near-linear or sublinear in
n, nothing substantially better than simulating the classic PRAM or
distributed algorithms for maximal matching was known. For memory
linear (or even slightly sublinear) in n, we show the first algorithm
that runs in poly(log log n) rounds and computes a near-maximal
matching.
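As a quick illustration of the 2-approximation guarantee mentioned above, here is a minimal sequential sketch (not the talk's parallel algorithm): any maximal matching, e.g. one built greedily, has at least half as many edges as a maximum matching, since every maximum-matching edge must share an endpoint with some chosen edge.

```python
def greedy_maximal_matching(edges):
    """Greedily build a maximal matching from an edge list.

    An edge is added whenever neither endpoint is already matched;
    the result is maximal, hence a 2-approximation of a maximum matching.
    """
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching
```

For example, on the path 1-2-3-4, processing the middle edge (2, 3) first yields a matching of size 1, while the maximum matching has size 2 — exactly the factor-2 gap the guarantee allows.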
In order to establish the result, we introduce a technique of round
compression. It allows us to execute a superconstant number of rounds
of a local distributed algorithm in a constant number of rounds of a
massively parallel system. Our result and techniques have already
inspired follow-up research.
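The flavor of round compression can be conveyed with a toy sketch, which is my own illustration rather than the paper's algorithm: vertices are randomly partitioned across machines, and each machine runs many steps of a local algorithm (here, a simple greedy matching) on its induced subgraph within a single parallel round. Because the vertex parts are disjoint, the union of the per-machine matchings is itself a valid matching. The function and parameter names below are hypothetical.

```python
import random

def greedy_matching(edges):
    """Greedy matching on an edge list (stand-in for a local algorithm)."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def one_compressed_round(vertices, edges, num_machines, rng):
    """Simulate one massively parallel round with random vertex partitioning."""
    # Assign each vertex to a machine uniformly at random.
    part = {v: rng.randrange(num_machines) for v in vertices}
    matching = []
    for m in range(num_machines):
        # A machine sees only edges with both endpoints in its part...
        local = [(u, v) for u, v in edges
                 if part[u] == m and part[v] == m]
        # ...and runs many local steps at once on that induced subgraph.
        matching.extend(greedy_matching(local))
    # Disjoint vertex parts guarantee the combined result is a matching.
    return matching
```

In the actual algorithm, such rounds are repeated on the residual graph of unmatched vertices, with the analysis controlling how much progress survives the random partitioning.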
Joint work with Artur Czumaj, Jakub Łącki, Aleksander Mądry, Slobodan
Mitrović, and Piotr Sankowski.