Dana Moshkovitz: An Overview of (Some of) My Research


My Post-Doc


Subhash Khot and I researched an approach to proving the Unique Games Conjecture. We proved an NP-hardness result for a problem on real linear equations that, we believe, may be useful as a starting point for a proof of the conjecture.


The Unique Games Conjecture, which Subhash formulated in 2002, states that given a system of linear equations of the form

x_i + x_j = c (mod k)

for large k, where all but at most a δ fraction of the equations can be satisfied simultaneously, it is NP-hard to find an assignment to the variables that satisfies even an ε fraction of the equations.


The equations are “unique” in the sense that an assignment to any one of the two variables appearing in an equation uniquely determines a satisfying assignment to the other variable.
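To make the uniqueness property concrete, here is a minimal Python sketch on a small made-up instance (the equations and modulus here are hypothetical, purely for illustration): fixing x_i in the equation x_i + x_j = c (mod k) forces x_j = (c − x_i) mod k, and one can count the fraction of equations an assignment satisfies.

```python
# Illustrative unique game instance (hypothetical, not from any paper):
# equations x_i + x_j = c (mod k), stored as (i, j, c) triples.
k = 12
equations = [(0, 1, 5), (1, 2, 7), (0, 2, 3)]

def forced_value(xi, c):
    """Uniqueness: fixing one variable of an equation determines the other."""
    return (c - xi) % k

def satisfied_fraction(assignment):
    good = sum(1 for i, j, c in equations
               if (assignment[i] + assignment[j]) % k == c)
    return good / len(equations)

# Propagate from x_0 = 2 along the first two equations:
x = {0: 2}
x[1] = forced_value(x[0], 5)   # x_1 = 3
x[2] = forced_value(x[1], 7)   # x_2 = 4
# The third equation asks x_0 + x_2 = 3 (mod 12), but 2 + 4 = 6, so it fails.
print(satisfied_fraction(x))   # -> 0.666... (2 of 3 equations satisfied)
```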

The conjecture was originally phrased for any unique tests, and not necessarily for linear equations with two variables per equation. However, the two formulations were shown to be equivalent by Khot, Kindler, Mossel and O’Donnell.


Under the conjecture, one can prove tight NP-hardness of approximation for a large number of problems that otherwise do not have such proofs. Examples include Max-Cut, Vertex-Cover, and other problems where there are constraints posed on two variables at a time. Under the conjecture, a certain semi-definite programming based algorithm is optimal for any constraint satisfaction problem (Raghavendra, 2008).


Our approach (see paper) focuses on the natural problem of approximate real linear equations: given a system of real homogeneous linear equations,

a_1x_1 + … + a_qx_q = 0,

find a (non-trivial) assignment that approximately satisfies (i.e., a_1x_1 + … + a_qx_q ≈ 0) as many of the equations as possible. Note that for real equations, unlike equations over a finite field, it makes sense to consider approximate satisfaction, and not just exact satisfaction.


Specifically: assuming that all but at most a δ fraction of the equations can be satisfied exactly by an assignment that is far from all-zero, the goal is to find an assignment, far from all-zero, that approximately satisfies even just a 0.9 fraction of the equations.


·         “Far from all-zero” means that (∑ x_i²)/n = 1 (where n is the total number of variables), and that for most equations, the sum of squares of the variables appearing in the equation is at least 0.001. Because of homogeneity, the all-zero assignment trivially satisfies all the equations exactly; we do not wish to consider this trivial assignment or variants of it.

·         “Approximate satisfaction” means that |a_1x_1 + … + a_qx_q| < α, for a small margin parameter α.
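The two conditions above can be checked directly; here is a minimal Python sketch on a tiny hypothetical system (the equations, assignment, and margin are made up for illustration):

```python
# Hypothetical homogeneous system (illustrative only): each equation is a
# list of coefficients a_1..a_q with the indices of its q variables.
equations = [([1.0, -1.0], [0, 1]),        # x_0 - x_1 = 0
             ([1.0, -1.0], [1, 2]),        # x_1 - x_2 = 0
             ([1.0, 1.0, 1.0], [0, 1, 2])] # x_0 + x_1 + x_2 = 0
x = [1.0, 1.0, 1.0]   # candidate assignment
alpha = 0.01          # margin parameter

n = len(x)
# "Far from all-zero", first condition: (sum x_i^2)/n = 1.
assert abs(sum(v * v for v in x) / n - 1.0) < 1e-9

def margin(coeffs, idx):
    """|a_1 x_1 + ... + a_q x_q| for one equation."""
    return abs(sum(a * x[i] for a, i in zip(coeffs, idx)))

frac = sum(1 for eq in equations if margin(*eq) < alpha) / len(equations)
print(frac)  # the first two equations are satisfied exactly; the third
             # has margin 3, so 2/3 of the equations are alpha-satisfied
```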


A simple semi-definite programming based algorithm solves this problem with margin α = O(δ^{1/2}). Interestingly, the same margin is achieved no matter what the number of variables per equation, q > 1, is. In contrast, for discrete constraint satisfaction problems, better approximations are usually known for the case q = 2 (for which we only know hardness under the unique games conjecture) than for the case q > 2 (for which we know NP-hardness).


We prove NP-hardness of the approximate linear equations problem for margin α = 0.0001·δ^{1/2}, for q > 2.


We suggest the following program for proving the Unique Games Conjecture:

1.   Prove NP-hardness for q>2, which we did.

2.   Prove NP-hardness for q=2, perhaps by reduction from step 1 – this is the step we do not know how to do.

3.   Apply parallel repetition to deduce the Unique Games Conjecture. This is relatively easy given step 2.


A few remarks about the construction for q>2:

·         The constraint graph induced on the variables is not an expander, which we know must be the case for hard unique games (this was proved by Arora, Khot, Kolla, Steurer, Tulsiani and Vishnoi, 2008).

·         The blow-up in the reduction is from n to n^{poly(1/δ)}, which we know must happen for unique games if the conjecture is true (this was proved by Arora, Barak and Steurer, 2010).



My PhD


Ran Raz and I showed that Max-3SAT and other approximation problems exhibit a sharp phase transition in their computational hardness.

That is, assume that (exact) 3SAT takes exponential time 2^{Ω(n)}, and plot the approximation factor on one axis and the time complexity of achieving that approximation on the other axis. The resulting graph for Max-3SAT exhibits a sharp threshold:


Obtaining any approximation better than 7/8 + o(1) takes (nearly) exponential time 2^{n^{1−o(1)}}.

Obtaining a 7/8-approximation takes polynomial (in fact, linear) time, via a simple random-assignment algorithm.

And the drop from exponential time to polynomial time is sharp: it occurs in a window of width o(1) around 7/8.
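The random-assignment algorithm is simple enough to sketch in a few lines of Python (on a randomly generated instance, purely for illustration): a uniformly random assignment satisfies each clause with three distinct variables with probability exactly 1 − 1/8 = 7/8, so over many clauses the satisfied fraction concentrates around 7/8.

```python
import random
random.seed(1)

# Sketch of the 7/8-approximation for Max-3SAT: assign every variable a
# uniformly random truth value. A clause on 3 distinct variables fails
# only when all 3 literals are false, which happens with probability 1/8.
n_vars, n_clauses = 50, 20000
clauses = []
for _ in range(n_clauses):
    vs = random.sample(range(n_vars), 3)           # 3 distinct variables
    clauses.append([(v, random.choice([True, False])) for v in vs])
    # (v, True) is the literal x_v; (v, False) is its negation

assignment = [random.choice([True, False]) for _ in range(n_vars)]

def clause_satisfied(clause):
    return any(assignment[v] == sign for v, sign in clause)

frac = sum(map(clause_satisfied, clauses)) / n_clauses
print(frac)  # concentrates around 7/8 = 0.875
```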


By the work of Håstad from 1997, it was known that any approximation of Max-3SAT better than 7/8 + ε, for a constant ε > 0, is NP-hard, and the lower bound on the time complexity was 2^{n^{f(ε)}} for some fixed increasing function f of ε.

This left open the possibility that the computational threshold at 7/8 is coarse, and that the time complexity depends super-polynomially on the quality of approximation. In particular, it was open whether a sub-exponential time algorithm could be given for approximation ratio 7/8 + ε for some constant ε < 1/8.


Our work eliminates these possibilities, and shows that the computational phase transition is sharp.


It does so not just for Max-3SAT, but for all of Håstad’s problems, and for many other problems (e.g., linear equations over finite fields). Our work offers an efficient alternative to Raz’s parallel repetition theorem, and thus it works for every problem whose NP-hardness of approximation is proven by composition of Raz’s verifier with the long-code. The latter method is today the mainstream method to prove NP-hardness of approximation, and so our results are widely applicable.


What goes into the proof? In the few years before we obtained this result, we developed techniques for randomness efficiency in algebraic tests. These techniques, together with a new composition method, yielded the result.


The main algebraic testing problem we considered was low degree testing:

given a finite field F and natural numbers m and d, as well as a function f: F^m → F, what is the relation between f’s proximity (in Hamming distance) to a polynomial of degree d, and its expected proximity to degree d when restricted to a random constant-dimensional subspace (e.g., a line or a plane)?


One direction of this relation is relatively easy: if f agrees with a degree-d polynomial on a large fraction of the points in F^m, then, when restricted to almost all the lines, f is close to degree d.

The other direction is trickier: suppose that, when restricted to a noticeable fraction of the lines, the function f is close to degree d. Does this mean that f is close to degree d on F^m?

In 1997, Arora and Sudan, as well as Raz and Safra, proved that this is indeed the case: the “global proximity” and the “local proximity” are essentially the same. This came after Rubinfeld and Sudan, and others, had proved the analogous statement for the case where the local proximity is close to 1.
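The basic algebraic fact underlying line tests can be checked concretely. The following Python sketch (a toy example over a tiny field, not the tester from any paper) verifies that restricting a polynomial of total degree d over F = Z_p to a line gives a univariate polynomial of degree at most d, by interpolating the values along the line:

```python
# Toy illustration: over F = Z_p, the restriction of a total-degree-d
# polynomial f to a line {a + t*b : t in F} is univariate of degree <= d.
import random

p = 7          # a small prime, so F = Z_p
d = 3          # total degree of the example polynomial below

def f(x, y):   # example polynomial of total degree 3 over Z_7
    return (x * x * y + 3 * y) % p

def interpolate(values):
    """Coefficients (lowest degree first) of the unique polynomial of
    degree < p through the points (t, values[t]), t = 0..p-1, mod p."""
    coeffs = [0] * p
    for i in range(p):
        num, denom = [1], 1          # build L_i(t) = prod_{j!=i} (t-j)/(i-j)
        for j in range(p):
            if j == i:
                continue
            new = [0] * (len(num) + 1)
            for deg, c in enumerate(num):   # multiply num by (t - j)
                new[deg] = (new[deg] - j * c) % p
                new[deg + 1] = (new[deg + 1] + c) % p
            num = new
            denom = denom * (i - j) % p
        inv = pow(denom, p - 2, p)          # modular inverse via Fermat
        for deg in range(p):
            coeffs[deg] = (coeffs[deg] + values[i] * inv * num[deg]) % p
    return coeffs

random.seed(0)
a = (random.randrange(p), random.randrange(p))   # a point on the line
b = (random.randrange(p), random.randrange(p))   # the line's direction
line_values = [f((a[0] + t * b[0]) % p, (a[1] + t * b[1]) % p)
               for t in range(p)]
coeffs = interpolate(line_values)
assert all(c == 0 for c in coeffs[d + 1:])   # restriction has degree <= d
```

A low degree test exploits the contrapositive direction quantitatively: if f is far from every degree-d polynomial, its restrictions to random lines should frequently fail this degree check.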


We devised a small collection of constant-dimensional subspaces, of size |F^m|^{1+o(1)} (rather than ~|F^m|^2), such that the theorem still holds. Later we showed how to use this theorem to derive randomness-efficient PCPs.