@article{BGHKX20,
author = { Mohammad Bavarian and
           Badih Ghazi and
           Elad Haramaty and
           Pritish Kamath and
           Ronald L. Rivest and
           Madhu Sudan },
title = { Optimality of Correlated Sampling Strategies },
date = { 2020-11-09 },
url = { https://theoryofcomputing.org/articles/v016a012/ },
journal = { Theory of Computing },
volume = { 16 },
pages = { 1--12 },
abstract = {
In the correlated sampling problem, two players are given probability
distributions $P$ and $Q$, respectively, over the same finite set, along
with access to shared randomness. Without any communication, each player
must output an element sampled according to their respective distribution,
while minimizing the probability that their outputs disagree. A well-known
strategy due to Kleinberg and Tardos, and independently to Holenstein,
with a close variant (for a similar problem) due to Broder, solves this
task with disagreement probability at most $2\delta/(1+\delta)$, where
$\delta$ is the total variation distance between $P$ and $Q$. This
strategy has been used in several different contexts, including sketching
algorithms, approximation algorithms based on rounding linear programming
relaxations, the study of parallel repetition, and cryptography.
\par
In this paper, we give a surprisingly simple proof that this strategy is
essentially optimal. Specifically, for every $\delta \in (0,1)$, we show
that any correlated sampling strategy incurs a disagreement probability of
essentially $2\delta/(1+\delta)$ on some inputs $P$ and $Q$ with total
variation distance at most $\delta$. This partially answers a recent
question of Rivest.
\par
Our proof is based on studying a new problem that we call constrained
agreement. Here, the two players are given subsets $A \subseteq [n]$ and
$B \subseteq [n]$, respectively, and their goal is to output elements
$i \in A$ and $j \in B$, respectively, while minimizing the probability
that $i \neq j$. We prove tight bounds for this question, which in turn
imply tight bounds for correlated sampling. Though we settle the basic
questions about these two problems, our formulation leads to more
fine-grained questions that remain open.
},
}