MSR/MIT Theory Reading Group
This seminar series is now over and replaced by the Harvard/MIT/MSR Theory Reading Group.
Goal: The goal of this reading group is to learn in significant depth about (new/exciting) developments in theoretical computer science and related areas.
Formula: Each meeting consists of a three-hour-long whiteboard talk given by the speaker. Talks tend to be informal in nature and quite interactive; in particular, questions from the audience are strongly encouraged. There is a break in the middle – it is not impolite to leave during this break. Ah... and we have pizza!
NEW Time and place: Unless otherwise noted, the meetings will be held on Fridays 12:45 pm–3:45 pm. Starting this spring the plan is to alternate the location between MSR New England (Barton room on the 1st floor of One Memorial Drive building) and MIT (Stata Center, Hewlett room, 32-G882). Note: Pizza arrives at 12:30pm.
Organizers: Boaz Barak, Ankur Moitra, Dana Moshkovitz, Ronitt Rubinfeld, Madhu Sudan.
Mailing list: There is a mailing list where announcements of upcoming talks, as well as other information related to the reading group, appear. To subscribe to this list, visit this page. Please note that the mailing list is intended only for local participants. So please subscribe using an email that lets us know you are local, or send email to madhu letting him know who, and where, you are.
Upcoming Talks:
- No upcoming talks
Past Talks:
- Tuesday, September 1, 2015: Speaker: Nati Linial (Hebrew U.). Topic: Random simplicial complexes.
Why do graphs appear in most applications of mathematics to the real world? One reason is that many large systems that we study are defined by pairwise interactions of their constituents: people in social networks, companies in trading, interacting proteins in biological systems, etc. But what if the underlying interactions involve more than two parties? If you still model the situation using graphs, as people often do, this description is necessarily lossy. You can also appeal to the theory of hypergraphs, where hyperedges can have any cardinality, not just two (as in graphs). Unfortunately the theory of hypergraphs is not nearly as developed as graph theory, so we focus on simplicial complexes – hypergraphs in which every subset of a hyperedge must also be a hyperedge. Simplicial complexes can be viewed geometrically, and in particular a graph is just a one-dimensional simplicial complex. My main story is how we developed a theory of random d-dimensional simplicial complexes that coincides, for d=1, with the Erdős–Rényi G(n,p) theory.
It is known that the threshold for connectivity in G(n,p) is p=log n/n and the first result in this theory is that for the analogous question in d dimensions the threshold is at p=d log n/n. The next challenge was to find the high-dimensional counterpart of the phase transition of random graphs and the emergence of the giant component at p=1/n. Here the story in general dimension is more complex and we have recently finished deciphering it.
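The d=1 (graph) case of this threshold is easy to see experimentally. The following minimal simulation sketch is my own illustration, not material from the talk: it samples G(n,p) at densities below and above p = log n / n and checks connectivity with a small union-find.

```python
import math
import random

def is_connected(n, p, rng):
    """Sample G(n, p) and test connectivity via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
    return len({find(v) for v in range(n)}) == 1

rng = random.Random(0)
n = 200
for c in (0.5, 2.0):  # p = c * log(n) / n: below vs. above the threshold
    p = c * math.log(n) / n
    hits = sum(is_connected(n, p, rng) for _ in range(20))
    print(f"c = {c}: connected in {hits}/20 trials")
```

At c = 0.5 the graph essentially always has isolated vertices, while at c = 2 it is almost always connected, matching the p = log n / n threshold quoted above.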
In the first half of my lecture (about one hour) I give a computer presentation of the main results in this area. After a short break I will move to a board presentation of some of the ideas in the main proofs. All the necessary background will be introduced.
My collaborators in this project are R. Meshulam and T. Luczak, and the students M. Rosenthal, L. Aronshtam, and Y. Peled.
- Friday, July 24, 2015: Speaker: Pravesh Kothari (U.T. Austin). Topic: Sum of Squares Lower Bounds from Pairwise Independence.
The Sum of Squares (SoS) algorithm is a family of algorithms parametrized by a number d, called the degree, where the d-th algorithm takes n^{O(d)} time to execute. It can be seen as a common generalization and extension of both linear programming and spectral techniques, and it has been remarkably successful in combinatorial optimization, capturing the state-of-the-art algorithms for Sparsest Cut, Max-Cut, Unique Games/Small Set Expansion, etc.
An instance of a constraint satisfaction problem consists of a predicate P: {0,1}^k --> {0,1}, n Boolean variables, and a collection of ordered k-tuples C_1, C_2, ..., C_m of literals over the variables, called the constraints. The goal is to come up with an assignment to the variables so that P(C_i) = 1 for as many constraints C_i as possible. A predicate P is pairwise independent if there is a balanced pairwise independent probability distribution on its satisfying assignments. Such predicates are very abundant, in that a random predicate on k variables with ~k^2 satisfying assignments is pairwise independent with high probability.
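As a concrete toy example (mine, not from the talk): the 3-XOR predicate is pairwise independent, since the uniform distribution over its satisfying assignments is balanced and pairwise independent. The short script below checks the definition directly.

```python
from itertools import product

def xor3(x):
    # The 3-XOR predicate: satisfied when the bits sum to 0 mod 2.
    return (x[0] + x[1] + x[2]) % 2 == 0

# Uniform distribution over the satisfying assignments of the predicate.
support = [x for x in product((0, 1), repeat=3) if xor3(x)]

def is_balanced_pairwise_independent(support):
    k = len(support[0])
    n = len(support)
    # Every single coordinate must be uniform over {0,1} ...
    for i in range(k):
        for b in (0, 1):
            if sum(x[i] == b for x in support) * 2 != n:
                return False
    # ... and every pair of coordinates uniform over {0,1}^2.
    for i in range(k):
        for j in range(i + 1, k):
            for a, b in product((0, 1), repeat=2):
                if sum(x[i] == a and x[j] == b for x in support) * 4 != n:
                    return False
    return True

print(is_balanced_pairwise_independent(support))  # prints True
```

By contrast, the 3-OR predicate fails the test: its 7 satisfying assignments do not make any coordinate uniform.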
This talk focuses on showing that despite its remarkable success in combinatorial optimization, for constraint satisfaction problems on pairwise independent predicates, the SoS algorithm requires fully exponential time to outperform the simple algorithm that returns a random Boolean value for each variable x_i.
Specifically, for any pairwise independent predicate P, there exists an instance I of Max-P CSP on n variables such that no assignment can satisfy more than a ~|P^{-1}(1)|/2^k fraction of the constraints of I, but the SoS algorithm of degree Theta(n) cannot certify that I is unsatisfiable. Interestingly, it is not presently known whether all pairwise independent CSPs are NP-hard to approximate beyond the random assignment threshold.
I will start by discussing some background facts about the SoS algorithm and CSPs; the talk will be self-contained and will not require any special background.
This is based on joint work with Boaz Barak and Siu On Chan.
- Friday, July 17, 2015: Speaker: Mrinal Kumar (Rutgers). Topic: Recent progress on arithmetic circuit lower bounds and exponential lower bounds for depth five circuits over finite fields.
Starting with a beautiful result of Gupta et al. in 2012, the last few years have seen exciting progress on the problem of proving lower bounds for interesting special classes of homogeneous depth four arithmetic circuits. These lower bounds are particularly interesting since they seem to come close to resolving the algebraic analog of the P vs NP conjecture.
I will briefly talk about these developments and end with a discussion of a recent exponential lower bound for general homogeneous depth 5 arithmetic circuits over small finite fields. These are the first superpolynomial lower bounds for this model over any field other than F_2.
The talk will be based on joint work with Ramprasad Saptharishi, and should be mostly accessible to a general theory audience.
- Friday, July 10, 2015: Speaker: Noga Ron-Zewi (IAS). Topic: High-rate locally-correctable and locally-testable codes with sub-polynomial query complexity.
Locally-correctable codes (LCCs) and locally-testable codes (LTCs) are special families of error-correcting codes that admit extremely efficient, i.e., sub-linear time, algorithms for error correction and detection, respectively, while making a few queries to the received word. These codes were originally studied in complexity theory because of their close relationship to program checking and probabilistically checkable proofs (PCPs), but subsequently they were extensively studied for their own sake.
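The notion of local correction can be made concrete with the classic 2-query example, the Hadamard code (constant query complexity but exponentially low rate, so quite unlike the high-rate codes of this work). The sketch below is my own illustration with invented helper names: to recover one position x of a codeword (the inner product <a, x> mod 2), XOR the two random positions r and r^x of the corrupted word and take a majority vote over a few trials.

```python
import random

def hadamard_encode(a, n):
    # Codeword: all inner products <a, x> mod 2, indexed by x in {0,1}^n.
    return [bin(a & x).count("1") % 2 for x in range(1 << n)]

def locally_correct(word, x, n, rng, trials=25):
    # Two queries per trial: <a, x> = <a, r> + <a, r xor x> (mod 2) for
    # any r, and a random r hits corrupted positions only rarely.
    votes = 0
    for _ in range(trials):
        r = rng.randrange(1 << n)
        votes += word[r] ^ word[r ^ x]
    return int(votes * 2 > trials)

n, a = 8, 0b10110101
word = hadamard_encode(a, n)
rng = random.Random(1)
# Corrupt 10% of positions (below the 1/4 tolerance of this corrector).
for i in rng.sample(range(1 << n), (1 << n) // 10):
    word[i] ^= 1
x = 0b00110011
print(locally_correct(word, x, n, rng) == bin(a & x).count("1") % 2)
```

Each trial is wrong only when exactly one of the two queried positions is corrupted, so with corruption rate below 1/4 the majority vote recovers the bit with high probability.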
In this work, we construct the first locally-correctable codes and locally-testable codes with constant rate, constant relative distance, and sub-polynomial query complexity and running time. Specifically, we show that there exist binary LCCs and LTCs with block length n, constant rate (which can even be taken arbitrarily close to 1), constant relative distance, and query complexity and running time 2^{O~(sqrt(log n))}. Previously such codes were known to exist only with query complexity and running time n^{beta} (for constant beta > 0), and there were several, quite different, constructions known.
Along the way, we also construct LCCs and LTCs over large (but constant) size alphabets, with the same query complexity and running time 2^{O~(sqrt(log n))}, which additionally have the property of approaching the "Singleton bound", so they have almost the best-possible relationship between their rate and distance. This has the surprising consequence that asking for a large alphabet error-correcting code to further be an LCC or LTC does not require any sacrifice in terms of rate and distance! Such a result was previously not known for any sub-linear query complexity.
Time permitting we will also discuss further recent progress on LTCs.
Joint work with Swastik Kopparty, Or Meir, and Shubhangi Saraf.
- Friday, June 26, 2015: Speaker: Ben Rossman (NII, Tokyo). Topic: Correlation Bounds Against Monotone NC^1.
Product distributions are conjectured to be a source of hard instances for important monotone problems like SAT and CLIQUE. Yet despite a long history of lower bounds in monotone models of computation, average-case lower bounds under product distributions have been an elusive goal. In recent work, we obtained the first results in this setting: correlation bounds against the class Monotone NC^1 of polynomial-size log-depth monotone circuits.
Our main theorem is a lower bound for the average-case k-CYCLE problem on G(n,p). The proof technique combines the “pathset complexity” framework of [R.14] with a new notion of “persistent minterms” of a monotone function under evolving monotone noise. Using O’Donnell’s hardness amplification theorem, we then obtain an optimal 1/2 + n^{-1/2+o(1)} correlation bound for an explicit n-variable monotone function under the uniform distribution. Finally, we extend this result to the negation-limited setting, achieving a lower bound against NC^1 circuits with (1/2 - o(1))*log(n) negation gates. (This is half optimal, since log(n) negations are sufficient in general for any function in NC^1.)
This talk will begin with an overview of the history and challenges of proving monotone circuit lower bounds. I will give an overview of the pathset complexity framework and then mainly discuss the new contributions of the CCC’15 paper.
- Friday, April 3, 2015: Speaker: Abhishek Bhowmick (U.T. Austin). Topic: The list decoding radius of Reed-Muller codes over small fields.
The list decoding problem for a code asks for the maximal radius up to which any ball of that radius contains only a constant number of codewords. The list decoding radius is not well understood even for well studied codes, like Reed-Solomon or Reed-Muller codes. The Reed-Muller code RM_{F}(n,d) for a finite field F is defined by n-variate degree-d polynomials over F. It is conjectured that the list decoding radius of Reed-Muller codes can be as large as the minimal distance of the code. This is known in a number of cases: the Goldreich-Levin theorem and its extensions established it for d=1 (Hadamard codes); Gopalan, Klivans and Zuckerman established it for the binary field F_2; and Gopalan established it for d=2 and all fields. In a recent work with Lovett, we prove the conjecture for all constant prime fields and all constant degrees. The proof relies on several new ingredients, including higher-order Fourier analysis.
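The minimal distance that anchors the conjecture is explicit for Reed-Muller codes over F_2: RM(m, d) has minimum distance 2^(m-d). A brute-force check of this fact at toy sizes (my own sketch, not from the talk):

```python
from itertools import combinations, product

def rm_codewords(m, d):
    # Reed-Muller RM(m, d) over F_2: evaluations of degree-<=d multilinear
    # polynomials in m variables at all points of {0,1}^m.
    points = list(product((0, 1), repeat=m))
    monomials = [s for r in range(d + 1) for s in combinations(range(m), r)]
    evals = [[all(p[i] for i in s) for p in points] for s in monomials]
    words = set()
    for coeffs in product((0, 1), repeat=len(monomials)):
        w = tuple(sum(c and e[j] for c, e in zip(coeffs, evals)) % 2
                  for j in range(len(points)))
        words.add(w)
    return words

m, d = 4, 2
words = rm_codewords(m, d)
min_wt = min(sum(w) for w in words if any(w))
print(min_wt)        # minimum distance of a linear code = minimum weight: 4
print(2 ** (m - d))  # matches the formula 2^(m-d): 4
```

For a linear code the minimum distance equals the minimum weight of a nonzero codeword, which is what the brute force computes.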
This is based on joint work with Shachar Lovett.
- Friday, February 13, 2015: Speaker: Moses Charikar (Princeton). Topic: Spectral Embedding of k-Cliques, Graph Partitioning and k-Means.
We introduce and study a new notion of graph partitioning, intimately connected to k-means clustering. Informally, our graph partitioning objective asks for the optimal spectral simplification of a graph as a disjoint union of k normalized cliques. It is a variant of graph decomposition into expanders (where expansion is not measured w.r.t. the induced graph). Optimizing this new objective is equivalent to clustering the effective resistance embedding of the original graph. Our approximation algorithm for the new objective is closely related to spectral clustering: it optimizes the k-means objective on a certain smoothed version of the resistive distance embedding. We also show that spectral clustering applied directly to the original graph gives guarantees for our new objective function.
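For readers who have not seen spectral clustering, here is a minimal numpy sketch of the vanilla two-cluster version (splitting by the sign of the second Laplacian eigenvector). This toy is my own illustration of the baseline method the abstract refers to, not the paper's algorithm.

```python
import numpy as np

def two_way_spectral_cut(adj):
    # Classic spectral clustering: split vertices by the sign of the
    # eigenvector for the second-smallest Laplacian eigenvalue.
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    vals, vecs = np.linalg.eigh(lap)
    fiedler = vecs[:, 1]  # eigh returns eigenvalues in ascending order
    return fiedler >= 0

# Two 5-cliques joined by a single edge: an obviously clusterable graph.
n = 10
adj = np.zeros((n, n))
adj[:5, :5] = 1 - np.eye(5)
adj[5:, 5:] = 1 - np.eye(5)
adj[4, 5] = adj[5, 4] = 1
side = two_way_spectral_cut(adj)
print(side[:5], side[5:])  # the two cliques land on opposite sides
```

On this graph the Fiedler vector is positive on one clique and negative on the other, so the sign split recovers the two normalized cliques exactly.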
In order to illustrate the power of our new notion, we show that approximation algorithms for our new objective can be used in a black box fashion to approximately recover a partition of a graph into k pieces given a guarantee that a good partition exists with sufficiently large gap in internal and external conductance.
In the first half of this talk, I will introduce our new graph partitioning objective and describe our results without proof. I will also discuss spectral clustering, various theoretical explanations for its success and how our results fit into this body of work. I'll briefly
discuss results from a preliminary implementation of our new algorithm. I will also discuss the application of finding a good partition in a well clustered graph. In the second half of the talk, I will explain how the new graph partitioning objective arose from a study of the k-means objective, and I will show as many proofs as time allows.
Joint work with Pranjal Awasthi, Ravishankar Krishnaswamy, and Ali Kemal Sinop.
- Friday, December 19, 2014: Speaker: Nike Sun (MSR & MIT). Topic: The exact k-SAT threshold for large k.
We establish the random k-SAT threshold conjecture for all k exceeding an absolute constant k(0). That is, there exists a single critical density alpha_s(k) such that a random k-SAT formula of clause density alpha is with high probability satisfiable for alpha < alpha_s(k), and unsatisfiable for alpha > alpha_s(k). The threshold alpha_s(k) matches the explicit prediction derived by statistical physicists on the basis of the so-called "one-step replica symmetry breaking" (1RSB) heuristic. In the talk I will explain the main obstacles in computing the threshold, and describe how they are overcome in our proof. No prior background in statistical physics will be assumed.
Joint work with Jian Ding and Allan Sly.
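The satisfiability threshold can be glimpsed empirically even at toy sizes, well below the asymptotic regime of the theorem. The sketch below (mine, not from the talk) brute-forces random 3-SAT at densities on either side of the conjectured 3-SAT threshold (~4.27); finite-size effects blur the transition, but the trend is visible.

```python
import random
from itertools import product

def random_ksat(n, m, k, rng):
    # m random k-clauses over n variables; a literal is (variable, sign).
    return [[(v, rng.choice((0, 1))) for v in rng.sample(range(n), k)]
            for _ in range(m)]

def satisfiable(n, clauses):
    # Brute force over all 2^n assignments -- fine at toy sizes.
    return any(all(any(bits[v] == s for v, s in cl) for cl in clauses)
               for bits in product((0, 1), repeat=n))

rng = random.Random(0)
n = 10
for alpha in (2.0, 8.0):  # densities below / above the ~4.27 threshold
    sat = sum(satisfiable(n, random_ksat(n, int(alpha * n), 3, rng))
              for _ in range(10))
    print(f"alpha = {alpha}: satisfiable in {sat}/10 trials")
```

At alpha = 2 nearly every trial is satisfiable; at alpha = 8 a first-moment count already shows the expected number of solutions is far below 1, so nearly every trial is unsatisfiable.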
- Friday, December 12, 2014: Speaker: Omri Weinstein (Princeton). Topic: Information Complexity, Direct Sums and Products and the Quest for Interactive Compression.
Over the past three decades, communication complexity has had a profound impact on nearly every field of theoretical computer science, and it constitutes one of the few known methods for proving unconditional lower bounds. As such, developing new tools in communication complexity is a promising approach for making progress in other computational models, such as circuit complexity, streaming algorithms, property testing, data structures and VLSI chip design.
One striking example of such a tool is information theory, introduced by Shannon in the late 1940s in the context of the one-way data transmission problem. Shannon's work revealed the intimate connection between information and communication, namely, that the amortized transmission cost of a random message is equal to the amount of information it contains. Classical information theory, however, does not readily carry over to interactive setups, where two (or more) parties must engage in a multi-round conversation in order to accomplish a desired task.
In this talk I will give an introduction to Information Complexity, an interactive analogue of Shannon's information theory, which has recently found many applications in theoretical computer science and in particular for proving "black-box" hardness results (hardness amplification), also known as Direct Sum and Product theorems. In the communication complexity model, proving such theorems turns out to be equivalent to the question of whether Shannon's compression theory indeed extends to the *interactive* setup.
I will survey some of the exciting recent progress in the field, including upper and lower bounds for the fascinating problem of interactive compression, and its practical and philosophical implications to computer science and beyond.
No prior knowledge will be assumed in this talk.
- Friday, December 5, 2014: Speaker: Ilya Razenshteyn (MIT). Topic: Sketching and Embedding are Equivalent for Norms.
Imagine the following communication task. Alice and Bob each have a point from a metric space. They want to transmit a few bits and decide whether their points are close to each other or far apart. Of particular interest are sketching protocols: Alice and Bob both compute short summaries of their inputs and then a referee, given these summaries, makes the decision; sketches are very useful for nearest neighbor search, streaming, randomized linear algebra, etc. Indyk (FOCS 2000) showed that for the l_p spaces with 0 < p <= 2 the above problem allows a very efficient sketching protocol. Consequently, any metric that can be mapped into an l_p space with all the distances being approximately preserved has a good protocol as well.
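A toy version of such a protocol for l_2 can be built from shared-randomness Gaussian projections (a JL-style sketch; this is my own illustration, not Indyk's p-stable construction for general l_p). Alice and Bob each compress their point to a short vector; the referee compares only the sketches.

```python
import math
import random

def sketch(point, seed, t=400):
    # Shared randomness: both parties derive identical Gaussian projections
    # from the seed, so the sketch is one linear map applied to each point.
    rng = random.Random(seed)
    return [sum(rng.gauss(0, 1) * x for x in point) / math.sqrt(t)
            for _ in range(t)]

def referee_close(sa, sb, threshold):
    # The referee sees only the two short sketches, never the points:
    # ||sketch(a) - sketch(b)|| concentrates around ||a - b||.
    return math.dist(sa, sb) <= threshold

alice = [1.0, 0.0, 2.0, 0.0]
sa = sketch(alice, seed=42)
print(referee_close(sa, sketch([1.1, 0.0, 2.0, 0.1], 42), threshold=1.0))
print(referee_close(sa, sketch([-3.0, 4.0, 0.0, 1.0], 42), threshold=1.0))
```

Because the sketch is linear and the projections are shared, the distance between sketches estimates the true l_2 distance, so near pairs are accepted and far pairs rejected with high probability.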
I will show that for normed spaces (a very important class of metric spaces) embedding into l_p is the only possible technique for solving the communication problem. Slightly more formally, we show that any normed space that admits a good communication (in particular, sketching) protocol for distinguishing close and far pairs of points embeds well into l_p with p being close to 1. The proof uses tools from communication complexity and functional analysis.
As a corollary, we will show communication lower bounds for the planar Earth Mover's Distance (minimum-cost matching metric) and for the trace norm (the sum of the singular values of a matrix) by deriving them from the (known) non-embeddability theorems and (the contrapositive of) our result.
The talk is based on a joint paper with Alexandr Andoni and Robert Krauthgamer (arXiv:1411.2577). All the necessary definitions and tools will be introduced in great detail throughout the talk. In particular, no prior knowledge about metric embeddings is assumed.
- Friday, November 21, 2014: Speaker: Ankur Moitra (MIT). Topic: The Threshold for Super-resolution.
Super-resolution is a fundamental task in imaging, where the goal is to extract fine-grained structure from coarse-grained measurements. Here we are interested in a popular mathematical abstraction of this problem that has been widely studied in the statistics, signal processing and machine learning communities. We exactly resolve the threshold at which noisy super-resolution is possible. In particular, we establish a sharp phase transition for the relationship between the cutoff frequency (m) and the separation (Delta). If m > 1/Delta + 1, our estimator converges to the true values at an inverse polynomial rate in terms of the magnitude of the noise. And when m < (1-epsilon)/Delta, no estimator can distinguish between a particular pair of Delta-separated signals even if the magnitude of the noise is exponentially small.
Our results involve making novel connections between extremal functions and the spectral properties of Vandermonde matrices. We establish a sharp phase transition for their condition number which in turn allows us to give the first noise tolerance bounds for the matrix pencil method. Moreover we show that our methods can be interpreted as giving preconditioners for Vandermonde matrices, and we use this observation to design faster algorithms for super-resolution. We believe that these ideas may have other applications in designing faster algorithms for other basic tasks in signal processing.
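The Vandermonde condition-number phase transition is easy to observe numerically. The small numpy sketch below is my own illustration (parameter choices are mine): it builds the m x k Vandermonde matrix with nodes exp(2*pi*i*f) on the unit circle for four Delta-separated frequencies, and compares m below and above ~1/Delta.

```python
import numpy as np

def vandermonde_cond(freqs, m):
    # m x k Vandermonde matrix with nodes exp(2*pi*i*f) on the unit
    # circle -- the matrix whose conditioning governs the noise
    # tolerance of the matrix pencil method.
    rows = np.arange(m).reshape(-1, 1)
    V = np.exp(2j * np.pi * rows * np.array(freqs))
    return np.linalg.cond(V)

delta = 0.01                               # minimum separation
freqs = [0.3 + r * delta for r in range(4)]
print(vandermonde_cond(freqs, 25))   # m * delta = 0.25: ill-conditioned
print(vandermonde_cond(freqs, 200))  # m * delta = 2: near-orthogonal columns
```

With m well below 1/Delta the columns are nearly parallel and the condition number blows up; with m above 1/Delta the columns decorrelate and the matrix is well conditioned.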
- Friday, November 14, 2014: Speaker: Dana Moshkovitz (MIT). Topic: Candidate Lasserre Integrality Gap For Unique Games.
The Unique Games Conjecture of Khot is the basis of remarkable inapproximability results. In recent years, researchers have explored possible algorithms for refuting it based on the Lasserre (aka Sum of Squares) hierarchy of semidefinite programs. In the talk we'll show a candidate hard instance for algorithms based on the Lasserre hierarchy. Joint work with Subhash Khot.
- Friday, November 7, 2014: Speaker: Aaron Potechin (MIT). Topic: Sum of Squares Lower Bounds for the Planted Clique Problem.
The planted clique problem asks whether or not we can find a k-clique which is planted in a random G(n,1/2) graph. Although the expected size of the largest clique in a random graph is only 2lg(n), we currently do not know of any polynomial time algorithm which can find a planted clique of size much smaller than n^(1/2).
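To see why the interesting regime is k near sqrt(n), note that much larger planted cliques are easy: planted vertices have visibly inflated degrees. The sketch below (my own toy, with invented helper names) plants a clique of size well above sqrt(n) and recovers most of it by degree alone; nothing this naive works once k shrinks toward sqrt(n).

```python
import random

def planted_clique_graph(n, k, rng):
    # G(n, 1/2) with a clique planted on vertices 0..k-1.
    adj = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            edge = 1 if (u < k and v < k) else rng.randint(0, 1)
            adj[u][v] = adj[v][u] = edge
    return adj

def top_degree_guess(adj, k):
    # For k much larger than sqrt(n log n), planted vertices have
    # noticeably inflated degrees, so high degree alone finds most of them.
    order = sorted(range(len(adj)), key=lambda v: -sum(adj[v]))
    return set(order[:k])

n, k = 400, 80                      # k well above sqrt(n) = 20
rng = random.Random(0)
adj = planted_clique_graph(n, k, rng)
recovered = top_degree_guess(adj, k) & set(range(k))
print(len(recovered))               # most of the 80 planted vertices
```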
The sum of squares hierarchy gives a hierarchy of approximation algorithms, each more powerful than the last. These algorithms are some of the most powerful approximation algorithms we know of and their performance on many problems is not well understood. In fact, until very recently there were no non-trivial bounds on the performance of the sum of squares hierarchy on the planted clique problem.
In this talk, I will present the first non-trivial bounds for this question, describing the planted clique problem, the sum of squares hierarchy, and how we can show that the sum of squares hierarchy fails to find the planted clique if its size k is sufficiently small.
Joint work with Raghu Meka and Avi Wigderson.
- Friday, August 15, 2014: Speaker: Rakesh Vohra (U. Penn.). Topic: Rounding in Matching Problems from Economics.
Matching problems in economics frequently differ from traditional matching problems in that the matching with desired properties one seeks is not the solution to a linear optimization problem defined over the set of feasible matchings. The difference arises from the role of preferences. Nevertheless, one can still use familiar LP rounding techniques to obtain matchings that come 'close' to the desired matchings.
1-sided matching problems, where only one side has preferences, fit easily into the traditional optimization framework, and so one can employ LP rounding techniques in these applications. 2-sided matching problems (where both sides have preferences) present a challenge, because the matchings of interest may not exist. I'll describe how Scarf's lemma, combined with well-known rounding arguments, can be used to 'solve' such 2-sided matching problems.
I'll assume a knowledge of linear programming but no prior knowledge about economic aspects of matching.
- Friday, August 1, 2014: Speaker: Siu On Chan (MSR). Topic: Lower bounds for Extended Formulations.
Linear and semidefinite programs can sometimes be rewritten to significantly reduce the number of inequalities in the programs. Such a rewriting is known as an extended formulation. In this talk I will survey recent results in this area, such as lower bounds for Max-Cut, Max-Clique, and Max-CSPs (but perhaps no time for Perfect Matching). I will discuss the connection of extended formulations to LP and SDP hierarchies, non-negative rank, and common information.
Based mostly on Fiorini–Massar–Pokutta–Tiwary–de Wolf STOC 2012, Braun–Pokutta FOCS 2013, Chan–Lee–Raghavendra–Steurer FOCS 2013, and Lee–Raghavendra–Steurer–Tan CCC 2014.
- Friday, July 25, 2014: Speaker: Bernhard Haeupler (MSR). Topic: How to have a conversation despite noise.
Error correcting codes have transformed the way we transfer information. In particular, they give computationally efficient ways to protect one-way communications against noise by adding minimal amounts of redundancy. This talk will survey and explain recent results on coding schemes that do the same for interactive communications, that is, which protect interactive conversations against arbitrary errors/noise. The interactive setting is much harder than its classic one-way counterpart because a single (entirely) corrupted message can completely derail a conversation. In addition to that, decisions on when and how to add redundancy become harder because parties do not know in advance what they will send / talk about in a conversation.
This will be an interactive board talk which will briefly recall some old(er) results which show the existence/feasibility of coding schemes for conversations but then focus on understanding brand-new results [Ghaffari, H., Sudan, STOC’14; Ghaffari, H., FOCS’14; H., FOCS’14] which include the first computationally efficient coding schemes tolerating the maximal amount of noise in various settings as well as the first coding schemes with optimal communication rate for random and adversarial errors.
Come and join the fun! No prior knowledge on the topic is required/expected.
- Friday, July 18, 2014: Speaker: Aaron Sidford (MIT). Topic: Path Finding Methods for Linear Programming.
In this talk, I will present a new algorithm for solving linear programs. Whereas the worst-case convergence rate of previous state-of-the-art linear programming algorithms depended on the larger of the number of variables and the number of constraints, our algorithms depend only on the smaller (up to polylogarithmic factors).
While this talk will be focused on linear programming techniques, I will motivate this study through the maximum flow problem. In particular, I will explain how the techniques presented can be used to achieve a running time of Õ(m sqrt(n) log^2(U)) for solving both the maximum flow and the more general minimum cost flow problems on directed graphs with m edges, n vertices, and capacity ratio U, thereby improving upon the previous fastest running times for solving these problems in the slightly dense regime.
This talk will assume no previous exposure to linear programming algorithms, convex optimization, or graph theory and will cover much of the basics regarding path following methods for solving linear programs.
This talk reflects joint work with Yin Tat Lee.
- Friday, May 2, 2014: Speaker: Or Meir (IAS). Topic: Toward Better Formula Lower Bounds: An Information Complexity Approach to the KRW Composition Conjecture.
One of the major open problems in complexity theory is proving super-polynomial lower bounds for circuits with logarithmic depth (i.e., that P is not contained in NC^1). This problem is interesting both because it is tightly related to understanding the power of parallel computation and of small-space computation, and because it is one of the first milestones toward proving super-polynomial circuit lower bounds.
Karchmer, Raz, and Wigderson [KRW95] suggested approaching this problem by proving the following conjecture: given two Boolean functions f and g, the depth complexity of the composed function g∘f is roughly the sum of the depth complexities of f and g. They showed that proving this conjecture would imply that P is not contained in NC^1.
As a starting point for studying the composition of functions, they introduced a relation called the universal relation, and suggested studying the composition of two universal relations. This suggestion proved fruitful, and indeed, an analogue of the KRW conjecture for the universal relation was proved later by Edmonds et al. [EIRS01] and Håstad and Wigderson [HW93]. However, studying the composition of two functions seems more difficult, and the KRW conjecture is still wide open.
In this work, we make a natural step in this direction, which sits between what is known and the real goal: we show that an analogue of the KRW conjecture holds for the composition of a function with a universal relation. We also suggest a candidate for the next step and provide initial results toward solving it. We hope that these results will revive the study of the KRW conjecture.
Our main technical contribution is developing an approach based on the notion of information complexity for analyzing KW games -- communication problems that are closely related to questions on circuit depth and formula complexity. Information complexity has proved recently to be a powerful tool to make major progress on another long-standing "economy-of-scale" problem in communication complexity: the direct-sum conjecture. We develop general tools for analyzing the information complexity of KW relations, which may be of independent interest.
Joint work with Dmitry Gavinsky, Omri Weinstein, and Avi Wigderson.
- Friday, April 18, 2014: Speaker: Amir Shpilka (Technion). Topic: Depth reduction for arithmetic circuits.
The goal of this talk is to present several depth-reduction results for arithmetic circuits. We will start by proving the seminal result of Valiant, Skyum, Berkowitz and Rackoff that VP=VNC^2, namely, that arithmetic circuits can be parallelized. We will then go over the more recent reductions to depth 4 and depth 3 circuits by Agrawal and Vinay and by Gupta, Kamath, Kayal and Saptharishi, respectively.
Time permitting, we shall also discuss some of the recent lower bounds proved for depth-4 circuits.
- Friday, April 11, 2014: Speaker: Lorenzo Orecchia and Zeyuan Allen-Zhu (MIT). Topic: A Survey of First-Order Iterative Methods in the Design of Fast Graph Algorithms.
Full title: A Survey of First-Order Iterative Methods in the Design of Fast Graph Algorithms: from Multiplicative Weight Updates to Nesterov’s Algorithm via Regularization.
Fast iterative methods from Convex Optimization play a crucial role in a number of recent breakthroughs in the design of nearly-linear-time algorithms for fundamental graph problems, such as maximum flow and graph partitioning. Multiplicative Weight Updates, Euclidean and Non-Euclidean Gradient Descent, and Nesterov’s Method have become a mainstay in the construction and analysis of fast algorithms.
However, until recently, the relation between these different methods and the reason for their success have been somewhat murky. What is the exact relation between Multiplicative Weight Updates and Gradient Descent? Why do Multiplicative Weight Updates show up in so many settings? What is the intuition behind Nesterov’s iterative algorithm that achieves the asymptotically optimal iteration count for smooth convex functions (hint: it isn’t just an algebraic trick)? The answers to these questions were not clear.
In this survey, we will provide answers by presenting a unified framework that reveals the power and limitations of each method and provides much needed geometric intuition. Among other insights, we will explain how Multiplicative Weight Updates are a particular instance of a dual iterative method, known as Mirror Descent, and how Nesterov’s algorithm can be naturally derived as a combination of Mirror and Gradient Descent.
- Friday, March 28, 2014: Speaker: Dana Moshkovitz (MIT). Topic: An Algebraic Parallel Repetition Theorem.
A long line of work in Theoretical Computer Science shows that a function is close to a low degree polynomial iff it is close to a low degree polynomial "locally". This is known as "low degree testing", and it is the core of the algebraic approach to the construction of PCPs.
We prove a parallel repetition theorem for low degree testing achieving optimal exponential decay of the error probability. We instantiate this theorem to obtain a low degree tester whose error probability is much smaller than any known local tester. A key lemma is an analysis of the sampling properties of the incidence graph of degree-k curves and k'-tuples of points in a finite space F^m.
We show that the Sliding Scale Conjecture in PCP, namely the conjecture that there are PCP verifiers whose error is exponentially small in their randomness, would follow from a derandomization of our parallel repetition theorem.
- Friday, March 14, 2014: Speaker: Constantinos Daskalakis (MIT). Topic: Learning structured distributions from a constant number of samples.
I will overview results on learning structured single-dimensional distributions, spending most of my time with two simple families:
- Poisson Binomial distributions: these are sums of independent, but not necessarily identically distributed, Bernoulli random variables; and
- SIIRVs: these are sums of independent integer-valued random variables.
While both are well-studied families of distributions, I will argue that Berry-Esseen-like results fall short of characterizing their general structure. Thus, I will offer stronger finitary central limit theorems, and will show how these are applicable to learning. In particular, I will show that these distributions can be learned from a number of samples that is independent of the number of variables that are summed over. I may also discuss applications to Nash equilibria of anonymous games.
Based on joint works (in order of intensity) with Papadimitriou, Diakonikolas, Servedio, O’Donnell and Tan.
-
Friday, February 28, 2014: Speaker:
Emmanuel Abbe
(Princeton).
Topic:
Concentration results in random CSPs, soft CSPs and applications.
I will start with an overview of the basic phase transition and concentration results for random k-SAT, and discuss more recent results for planted k-SAT. I will then discuss "soft CSPs", where the constraints are not hard but probabilistic, and where the goal is not to decide satisfiability but to quantify the likelihood of solutions. Applications to coding theory and graph clustering will be discussed, as well as proof techniques, in particular the interpolation method.
-
Friday, February 21, 2014: Speaker:
Venkatesan Guruswami
(Carnegie Mellon University).
Topic:
List decoding by evading subspaces.
A natural "folded" variant of the classical Reed-Solomon codes is known to be list-decodable with optimal redundancy (in particular, only epsilon higher than the worst-case error fraction, for any desired epsilon > 0). The last couple of years have seen improvements of this result that reduce the alphabet and list sizes to a constant depending only on epsilon.
The new developments are based on a combination of two ingredients: (i) a simple yet versatile linear-algebraic approach to list error-correction that first pins down the candidate solutions to a low-dimensional or well-structured subspace, and (ii) pre-coding the messages to belong to appropriate subspace-evasive sets or subspace designs that have small intersection with any subspace that could be output by the linear-algebraic list decoder.
In this talk, I'll survey some of these results. For the technical part, I will present the linear-algebraic algorithm to list decode folded Reed-Solomon codes, and hope to illustrate the ideas behind more advanced extensions (to variants of algebraic-geometric codes) by describing the linear-algebraic algorithm and its combination with subspace designs in the context of Reed-Solomon codes with evaluation points belonging to a subfield.
The talk should be self-contained. Partly based on joint works with Chaoping Xing (STOC'12, '13) and Carol Wang (IEEE Trans. Info. Theory 59(6), 2013).
-
Friday, February 14, 2014: Speaker:
Justin Thaler
(Simons Institute).
Topic:
Approximate Degree and the Method of Dual Polynomials.
The eps-approximate degree of a Boolean function f is the minimum degree of a real polynomial that point-wise approximates f to error eps. Approximate degree has wide-ranging applications in theoretical computer science, from computational learning theory to communication, query, and circuit complexity. Despite its importance, our understanding of approximate degree remains somewhat limited, with few general results known.
The focus of this talk will be on a relatively new method for proving lower bounds on approximate degree: specifying *dual polynomials*, which are dual solutions to a certain linear program capturing the approximate degree of any function. After surveying earlier work on approximate degree, I will describe how the method of dual polynomials has recently enabled progress on several open problems.
Joint work with Mark Bun.
-
Friday, December 6, 2013: Speaker:
Larry Guth
(MIT).
Topic:
Polynomial methods in incidence geometry (Note special time: 3:30pm-6:30pm).
Incidence geometry is a part of combinatorics studying the possible intersection patterns of lines, circles, or other simple shapes. For example, given L lines in the plane, what is the maximum possible number of points that lie in at least r lines? This problem was solved up to a constant factor by a fundamental theorem of Szemeredi and Trotter from the early 80's. Nevertheless, there are many simple open problems - for example, if we replace lines by circles in the problem above.
We will describe recent work in this area using polynomials in a somewhat unexpected way. This approach was influenced by work in error-correcting codes, where interesting ideas about polynomials are used to make good codes and good methods of decoding. These methods have led to good new estimates about the intersection patterns of lines in 3-dimensional space.
-
Friday, November 22, 2013: Speaker:
Richard Peng
(MIT).
Topic:
Algorithm Design using Spectral Graph Theory.
Spectral graph theory is the interplay between linear algebra and combinatorial graph theory. Laplace's equation and its discrete form, the Laplacian matrix, appear ubiquitously in mathematical physics. Due to the recent discovery of very fast solvers for these equations, they are also becoming increasingly useful in combinatorial optimization, computer vision, computer graphics, and machine learning.
This talk will present the first parallel algorithm for solving linear systems in graph Laplacians that runs in polylogarithmic time and nearly-linear work. The heart of the algorithm is a construction of a sparse approximate inverse chain for the input matrix: a sequence of sparse matrices whose product approximates its inverse. Whereas other fast algorithms for solving these systems of equations rely on tree embeddings, this algorithm only requires constructing spectrally similar graphs. This is joint work with Dan Spielman.
The first half of this talk will also survey two decades of progress in combinatorial preconditioning, which combines numerical and combinatorial algorithms via spectral graph theory.
-
Friday, November 15, 2013: Speaker:
Ben Rossman
(NII, Tokyo).
Topic:
Formulas vs. Circuits for Small Distance Connectivity.
It is an elementary fact that every (unbounded fan-in) depth-d circuit of size S is equivalent to a depth-d formula of size at most S^d. Karchmer and Wigderson (1988) proved a converse to this fact in the *monotone* setting: a blow-up of S^Omega(d) is necessary to simulate monotone depth-d circuits by formulas for all S = n^O(1) and d = O(log n). In recent work, we give the first partial converse in the *boolean* setting: a blow-up of S^Omega(d) is necessary to simulate boolean depth-d circuits by formulas for all S = n^O(1) and d <= logloglog n. This result follows from a novel formula-size lower bound for the "Distance k Connectivity" problem (given a graph G of size n with specified vertices s and t, is there a path of length <= k from s to t?). Whereas this problem has polynomial-size boolean circuits of depth O(log k), we prove that solving Distance k Connectivity by boolean formulas of depth <= log n/polyloglog n requires size n^Omega(log k) for all k <= loglog n. The proof of this result introduces a new technique ("pathset complexity") which, for the first time, strongly distinguishes between formulas and circuits in the bounded-depth boolean setting. This talk will include an overview of the relevant background in circuit complexity.
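The underlying problem itself is algorithmically easy, which is what makes the bounded-depth formula lower bound striking: a breadth-first search truncated at depth k decides Distance-k Connectivity in linear time. A minimal sketch:

```python
from collections import deque

def distance_k_connected(adj, s, t, k):
    """Decide Distance-k Connectivity: is there a path with at most k
    edges from s to t? A depth-truncated BFS solves it directly; the
    talk's point is that *bounded-depth formulas* for this problem
    require size n^Omega(log k)."""
    if s == t:
        return True
    seen = {s}
    frontier = deque([(s, 0)])
    while frontier:
        v, d = frontier.popleft()
        if d == k:           # already used k edges; cannot extend further
            continue
        for u in adj.get(v, ()):
            if u == t:
                return True
            if u not in seen:
                seen.add(u)
                frontier.append((u, d + 1))
    return False

# Path graph 0 - 1 - 2 - 3: s=0 and t=3 are at distance exactly 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(distance_k_connected(path, 0, 3, 3))  # True
print(distance_k_connected(path, 0, 3, 2))  # False
```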
-
Friday, November 8, 2013: Speaker:
Vinod Vaikuntanathan
(MIT).
Topic:
Fully Key Homomorphic Encryption and Applications.
We introduce the notion of a fully *key*-homomorphic encryption scheme, construct such a scheme based on the learning with errors assumption, and show a number of applications, including:
- attribute-based and functional encryption schemes with super-compact keys;
- a reusable garbled circuits construction where the garbling of a circuit C has size |C| + poly(k), k being a security parameter; and
- a reusable garbled Turing machine construction where the garbling of a TM M has size |M| + poly(k), *independent* of M's running time.
In contrast, all previous constructions of (even single-use) garbled circuits suffered from a multiplicative poly(k) blowup, and there were no constructions of (even single-use) garbled Turing machines from falsifiable cryptographic assumptions.
-
Friday, November 1, 2013: Speaker:
Salil Vadhan
(Harvard).
Topic:
Locally Testable Codes and Cayley Graphs.
We give two new characterizations of (F_2-linear, smooth) locally testable error-correcting codes in terms of Cayley graphs over (F_2)^h:
1. A locally testable code is equivalent to a Cayley graph over (F_2)^h whose set of generators is significantly larger than h and has no short linear dependencies, but yields a shortest-path metric that embeds into l_1 with constant distortion. This extends and gives a converse to a result of Khot and Naor (2006), which showed that codes with large dual distance imply Cayley graphs that have no low-distortion embeddings into l_1.
2. A locally testable code is equivalent to a Cayley graph over (F_2)^h that has significantly more than h eigenvalues near 1, which have no short linear dependencies among them and which "explain" all of the large eigenvalues. This extends and gives a converse to a recent construction of Barak et al. (2012), which showed that locally testable codes imply Cayley graphs that are small-set expanders but have many large eigenvalues.
Joint work with Parikshit Gopalan and Yuan Zhou. Full paper at http://arxiv.org/abs/1308.5158
-
Friday, October 25, 2013: Speaker:
Yael Kalai
(MSR).
Topic:
1-round delegation for deterministic computation.
In this talk we will focus on constructing multi-prover interactive proofs (MIPs) that are sound against *no-signaling* (cheating) strategies, for any deterministic computation. Such MIPs were studied in the context of MIPs with provers that share quantum entanglement, and are motivated by the physical principle that information cannot travel faster than light.
Our main motivation for studying this notion is a curious connection between such MIPs and 1-round arguments (which are computationally sound). Indeed, our MIP gives a 1-round argument for every language computable in time t=t(n), where the running time of the prover is poly(t) and the running time of the verifier is n·polylog(t), and where soundness relies on the existence of a sub-exponentially secure (computational) PIR scheme.
This is joint work with Ran Raz and Ron Rothblum.
-
Friday, October 11, 2013: Speaker:
Jelani Nelson
(Harvard).
Topic:
Dimensionality reduction via sparse matrices.
This talk will discuss sparse Johnson-Lindenstrauss transforms, i.e. sparse linear maps into much lower dimension which preserve the Euclidean geometry of a set of vectors. Applications to certain domains will also be presented, such as to numerical linear algebra.
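As a toy illustration of such a map (a sketch only, assuming numpy; the actual Kane-Nelson constructions distribute the nonzeros differently), here is a random matrix with a fixed number of signed nonzeros per column, applied to a vector:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_jl(d, m, s, rng):
    """Build a sparse random sign matrix: each column has exactly s
    nonzero entries, equal to +/- 1/sqrt(s), in uniformly random rows.
    A sketch in the spirit of sparse Johnson-Lindenstrauss transforms."""
    S = np.zeros((m, d))
    for j in range(d):
        rows = rng.choice(m, size=s, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

d, m, s = 2000, 256, 8
S = sparse_jl(d, m, s, rng)
x = rng.standard_normal(d)
ratio = np.linalg.norm(S @ x) / np.linalg.norm(x)
print(ratio)  # close to 1: the embedding roughly preserves Euclidean norm
```

Because each column has only s nonzeros, applying S to a vector with t nonzero coordinates costs O(s·t) rather than O(m·t), which is the point of sparsity.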
Based on joint works with Daniel Kane (Stanford) and Huy Nguyen (Princeton).
-
Friday, October 4, 2013: Speaker:
Van Vu
(Yale).
Topic:
Matrix perturbation with random noise and matrix recovery problems.
Classical matrix perturbation bounds, such as Weyl's (for eigenvalues) and Davis-Kahan (for eigenvectors), have long played an important role in various areas: numerical analysis, combinatorics, theoretical computer science, statistics, machine learning, etc.
In this talk, I am going to discuss a popular setting where the perturbation is random and the original matrix (data) is low rank (or approximately low rank). In this case, we can significantly improve the classical bounds mentioned above.
As an application, I will discuss a simple universal approach to several matrix recovery problems, such as the hidden clique problem, the hidden coloring problem, the clustering problem, the Netflix problem, etc. In certain cases, our method leads to results beyond those currently known.
Joint work with S. O'Rourke (Yale) and K. Wang (IMA).
-
Tuesday, August 6, 2013: Speaker:
Ryan O'Donnell
(CMU).
Topic:
Hypercontractivity made easy, and sharp threshold theorems - II.
Continuing from the previous lecture, this lecture will further develop the randomization/symmetrization technique in the setting of functions on general product probability spaces. Our motivation will be the desire to prove "sharp threshold theorems". These are the kinds of statements first proved by Friedgut for several interesting graph/hypergraph properties: e.g., for a randomly chosen k-CNF on n variables, there is a constant c such that the CNF is almost surely satisfiable if the clause density is c(1-o(1)) and almost surely unsatisfiable if the clause density is c(1+o(1)).
The key analytical tools for proving such sharp threshold theorems were developed by Friedgut, Bourgain, and Hatami. I will show the proof of Bourgain's version, and explain why it is basically just the KKL Theorem with a "randomization/symmetrization twist". It took me 10 years to fully understand this theorem, but now I think it's easy! If you make it to the end of this lecture, I hope you will find it easy too.
-
Tuesday, July 30, 2013: Speaker:
Ryan O'Donnell
(CMU).
Topic:
Hypercontractivity made easy, and sharp threshold theorems - I.
In this lecture I'll introduce the "Hypercontractivity Theorem" for real-valued functions on the boolean cube, f : {-1,1}^n -> R, and explain how you can view it as either:
- a natural strengthening of Holder's inequality;
- an isoperimetric statement saying that among small-volume sets in {-1,1}^n, Hamming balls have the smallest boundary (roughly speaking).
We'll see a new(?), essentially trivial reduction to the case of n=1, where it's easy. This means that the isoperimetric statement for {-1,1}^n follows from an analogous isoperimetric statement for {-1,1}. (!) I'll also show some applications of the Hypercontractivity Theorem, including the "KKL Theorem". Toward the end of the lecture I will show how to generalize all of these results to functions on any product probability space, in part by using the technique of "randomization/symmetrization".
-
Friday, July 26, 2013: Speaker:
Amit Sahai
(UCLA).
Topic:
How to Encrypt Software, and its applications.
The goal of secure program obfuscation is to make a program "unintelligible" while preserving functionality. Unfortunately, 12 years ago, Barak et al. showed that the most natural simulation-based formulation of this problem is impossible to achieve. However, Barak et al. also proposed the alternative notion of indistinguishability obfuscation, which requires that given any two equivalent circuits C and C' of similar size, the obfuscations of C and C' should be computationally indistinguishable.
While indistinguishability obfuscation for general circuits has been a tantalizing open question, it has also been an enigma: It is not clear what, exactly, an indistinguishability obfuscator must hide about the circuit it transforms. Therefore, it has not been clear how useful such an obfuscator would be.
In this talk, we present a construction of indistinguishability obfuscators for all circuits using ideal lattices, and give new techniques to leverage the power of indistinguishability obfuscation for applications. In particular, we use indistinguishability obfuscation as a critical tool to resolve the open question of functional encryption for general circuits:
In functional encryption, ciphertexts encrypt inputs x and keys are issued for circuits C. Using the key SK_C to decrypt an encryption of x yields the value C(x), but does not reveal anything else about x. Furthermore, no collusion of secret key holders should be able to learn anything more than the union of what they can each learn individually. The functional encryption scheme we construct also enjoys succinct ciphertexts, which enables several other applications.
The talk will assume no previous background in cryptography, except for a general familiarity with basic cryptographic definitions.
If time permits, I will also talk about additional applications of indistinguishability obfuscation, and open problems in obfuscation.
This talk will be based on joint works with Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, and Brent Waters.
-
Friday, July 19, 2013: Speaker:
Joel Spencer
(NYU).
Topic:
Finding Needles in Exponential Haystacks.
We discuss two recent methods in which an object with a certain property is sought. In both, a straightforward random object would succeed with only exponentially small probability. The new randomized algorithms run efficiently and also give new proofs of the existence of the desired object. In both cases there is a potentially broad use of the methodology.
-
Friday, July 12, 2013: Speaker:
Matt Weinberg
(MIT).
Topic:
Understanding Incentives: Reducing Mechanism- to Algorithm-Design.
In traditional algorithm design, no incentives come into play: the input is given, and your algorithm must produce a correct output. How much harder is it to solve the same problem when the input is not given directly, but instead reported by strategic agents (who may choose to lie) whose incentives may not align with your own?
This talk will present recent results on black-box reductions from mechanism to algorithm design. That is, we show how to solve optimization problems while properly incentivizing strategic agents using only black-box access to an algorithm for (a modified version of) the same optimization problem when the input is directly given. This talk will not assume any knowledge of economics or game theory.
Based on joint works with Yang Cai and Costis Daskalakis.
-
Friday, June 28, 2013: Speaker:
Yufei Zhao
(MIT).
Topic:
Green-Tao, Szemerédi, and transference.
The celebrated Green-Tao theorem states that the primes contain arbitrarily long arithmetic progressions. In this talk, I will give some background and explain the ideas of proof, including some recent simplifications due to Conlon, Fox, and myself. The main idea is a transference principle, which allows one to transfer a result about dense sets (in this case, Szemerédi's theorem on arithmetic progressions in dense subsets of integers) to one about subsets of sparse pseudorandom sets. The talk will be accessible to non-specialists in the area, and in particular to people working in algorithms/complexity.
-
Friday, June 21, 2013: Speaker:
Raghu Meka
(IAS).
Topic:
Association Schemes, Non-Commutative Polynomials and Lasserre Lower Bounds for Planted Clique.
Finding cliques in random graphs and the closely related "planted" clique variant, where a clique of size k is planted in a random G(n,1/2) graph, have been the focus of substantial study in algorithm design. Despite much effort, the best known polynomial-time algorithms only solve the problem for k ~ sqrt(n). Here we show that beating sqrt(n) would require substantially new algorithmic ideas, by proving a lower bound for the problem in the Lasserre/Sum of Squares hierarchy, the most powerful class of semi-definite programming algorithms we know of. Our (average case) lower bound uses tools from the classical theory of association schemes and some new large deviation bounds for matrix valued polynomials which could be of independent interest.
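To make the setting concrete, here is a minimal sketch of a planted clique instance together with the standard spectral heuristic that works when k is well above sqrt(n) (assuming numpy; this is the easy regime, not the Lasserre algorithm the lower bound addresses):

```python
import numpy as np

rng = np.random.default_rng(1)

def planted_clique(n, k, rng):
    """Sample G(n, 1/2) and plant a k-clique on a random vertex subset."""
    A = rng.integers(0, 2, size=(n, n))
    A = np.triu(A, 1)
    A = A + A.T                      # symmetric 0/1 adjacency, zero diagonal
    clique = rng.choice(n, size=k, replace=False)
    for i in clique:
        for j in clique:
            if i != j:
                A[i, j] = 1
    return A, set(clique)

n, k = 400, 60                       # k = 3*sqrt(n), above the sqrt(n) barrier
A, clique = planted_clique(n, k, rng)

# Spectral heuristic: for k >> sqrt(n), the top eigenvector of the centered
# adjacency matrix tends to put its largest-magnitude mass on the clique.
w, V = np.linalg.eigh(A - 0.5)
v = V[:, -1]
guess = set(np.argsort(-np.abs(v))[:k])
print(len(guess & clique))  # large overlap expected at these parameters
```

Below k ~ sqrt(n) the clique's eigenvalue is swamped by the random graph's spectral noise, which is exactly the regime the Lasserre lower bound concerns.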
-
Friday, June 14, 2013: Speaker:
Yury Polyanskiy
(MIT).
Topic:
Hypercontractivity of spherical averages in Hamming space.
Certain conditional expectation operators (stochastic matrices) have the property, called hypercontractivity (HC), that their Lp->Lq norm is 1 for p < q. HC was first discovered by Bonami and Gross for the "Bernoulli" operator acting on functions in n-dimensional Hamming space (binary hypercube). (The "Bernoulli" operator takes a real-valued function f on the Boolean cube and maps f to g given by g(x) = E_Z[f(x+Z)], where Z is a Bernoulli distributed random variable.) In this talk I will prove HC of the operator where Z is uniformly distributed on a Hamming sphere (so the Hamming weight of Z is constant).
There are several applications of HC in information theory (e.g., Ahlswede-Gacs '76) and CS (e.g., Kahn-Kalai-Linial '88). In the first part of the talk, I will talk about another application (in hypothesis testing) and about relatively modern approaches to proving HC (via the use of "log-Sobolev inequalities" and the comparison of "Dirichlet forms"). In the second part I will talk about our proof of the main result, for which the previous methods fail. Instead our proof is based on harmonic analysis on the hypercube and precise asymptotics of Krawtchouk polynomials.
Time-permitting we will also show the following application of the main result: Can we approximate a uniform distribution on the Hamming sphere by a sum of two independent copies of a low-entropy r.v. X? We will give a negative answer using our HC.
-
Friday, June 7, 2013: Speaker:
Michael Kapralov
(MIT).
Topic:
CANCELLED: Approximate near neighbor search and locality sensitive hashing.
The approximate near neighbor problem is a basic problem when dealing with high-dimensional data. Formally, the (c, r)-Approximate Near Neighbor Search (ANNS) problem is defined as follows: given a collection P of N points in a d-dimensional space, build a data structure that, given any query point, reports a point within distance c·r of the query if a point within distance r exists.
Locality Sensitive Hashing (LSH) is a tool introduced by Indyk and Motwani in 1998 to solve the ANNS problem. Roughly, an LSH scheme maps the d-dimensional space to a smaller universe with the property that nearby points are more likely to collide than far away points. In this talk I will introduce the LSH scheme and show how it is used in previous works to solve the ANNS problem. I will then talk about some of our recent work on this problem. Specifically:
1) I will present a Locality Sensitive Hashing scheme (LSH) for the c-approximate near neighbor problem in R^d under the l_p metric for p in (1, 2). Our scheme uses N^{1+O(1)/c^{p'}} extra space and dN^{O(1/c^{p'})} query time, for any choice of p' < p, making significant progress towards achieving the conjectured optimum of N^{1+1/c^p} space and N^{1/c^p} time. Prior to our work the best known solution for this range of parameters achieved N^{1+1/c} space and dN^{1/c} query time.
2) (Time permitting) I will also describe some lower bounds on the parameters achievable for ANNS: I will present the approach to ANNS lower bounds via metric expansion introduced by Panigrahy, Wieder and Talwar (FOCS'10) and use it to prove lower bounds for ANNS for the l_infty metric.
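To fix ideas, the simplest LSH family is bit sampling for Hamming space (a classical toy example in the Indyk-Motwani spirit, not the l_p scheme of the talk): hash each binary vector to its values on a fixed random subset of coordinates, so that close vectors collide with noticeably higher probability than far ones.

```python
import random

random.seed(0)

def bit_sampling_hash(dims):
    """Classic LSH for Hamming space: project a binary vector onto a
    fixed random subset of coordinates. A pair at Hamming distance r
    agrees on each sampled coordinate with probability 1 - r/d, so near
    pairs collide much more often than far pairs."""
    return lambda x: tuple(x[i] for i in dims)

d, num_samples = 100, 10
h = bit_sampling_hash(random.sample(range(d), num_samples))

x = [random.randint(0, 1) for _ in range(d)]
near = x[:]; near[0] ^= 1                   # Hamming distance 1
far = [b ^ 1 for b in x]                    # Hamming distance d

print(h(x) == h(near))  # collides unless coordinate 0 happened to be sampled
print(h(x) == h(far))   # False: every sampled coordinate differs
```

An actual ANNS data structure amplifies this gap by concatenating several such hashes and using multiple independent hash tables.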
Based on joint works with Piotr Indyk and Rina Panigrahy.
-
Friday, May 24, 2013: Speaker:
Aram Harrow
(MIT).
Topic:
Unique games, the Lasserre hierarchy and monogamy of entanglement.
In this talk, I'll describe connections between the unique games conjecture (or more precisely, the closely related problem of small-set expansion) and the quantum separability problem. Remarkably, not only are the problems related, but the leading candidate algorithms turn out to be essentially equivalent: for unique games, the algorithm is called the Lasserre hierarchy, and for quantum separability, it is called the "k-extendable" relaxation. As a result, insights from quantum information theory may provide an unexpected route to either quasipolynomial algorithms or hardness results for unique games. I will describe conjectures in quantum information theory that would imply nontrivial results in unique games, and give my perspective on what I see as promising directions.
The talk will not assume any knowledge of quantum mechanics, or for that matter, of the unique games conjecture or the Lasserre hierarchy. It is partly based on 1205.4484 and 1210.6367.
-
Friday, May 17, 2013: Speaker:
Sanjeev Khanna
(U. Penn.).
Topic:
Edge-Disjoint Paths in Networks.
A fundamental problem in combinatorial optimization is the edge-disjoint paths problem (EDP). We are given a collection of source-destination pairs in a network, and the goal is to maximize the number of pairs that can be connected by edge-disjoint paths. Even special cases of EDP correspond to non-trivial optimization problems, and the problem becomes NP-hard in highly restricted settings. A sequence of ideas developed over the past decade has led to great progress on understanding the approximability of EDP. We will survey some of this progress and highlight some outstanding remaining questions.
-
Friday, May 10, 2013: Speaker:
Udi Wieder
(MSR SVC).
Topic:
On the power of two choices and its generalizations.
Suppose m balls are sequentially thrown into n bins where each ball goes into a random bin. It is well-known that the gap between the load of the most loaded bin and the average is roughly sqrt(m log n / n), for large m. If each ball goes to the lesser loaded of two random bins, this gap dramatically reduces to O(log log n), independent of m. This phenomenon is sometimes called "the power of two choices".
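The contrast is easy to see empirically. A minimal simulation sketch (parameters illustrative, not from the talk):

```python
import random

random.seed(0)

def max_load(n, m, choices, rng):
    """Throw m balls into n bins; each ball inspects `choices` random
    bins and goes into the least loaded of them. choices=1 is the
    classical model; choices=2 exhibits the power of two choices."""
    loads = [0] * n
    for _ in range(m):
        candidates = [rng.randrange(n) for _ in range(choices)]
        best = min(candidates, key=lambda b: loads[b])
        loads[best] += 1
    return max(loads)

n = m = 10000
one = max_load(n, m, 1, random)
two = max_load(n, m, 2, random)
print(one, two)  # the one-choice max load is markedly larger
```

With m = n, theory predicts max load about log n / log log n for one choice versus about log log n for two, and the simulation reflects that gap.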
I will present a (biased) overview of the research done on the "power of two choices" paradigm and its generalizations, focusing on the tools used and the open problems which remain.
-
Friday, May 3, 2013: Speaker:
Daniel Kane
(Stanford).
Topic:
The Correct Exponent for the Gotsman-Linial Conjecture.
A (degree-d) polynomial threshold function is a function of the form f(x) = sgn(p(x)) for some (degree-d) polynomial p. The noise sensitivity of a boolean function is the probability that a small change in the input will lead to a change in the output. Gotsman and Linial proposed a conjecture for the correct bounds on the noise sensitivity of a polynomial threshold function in terms of its degree and number of variables. We will discuss recent work yielding bounds nearly as good as the conjectured correct ones.
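The quantity is easy to estimate empirically. A Monte Carlo sketch for majority, the simplest degree-1 PTF (the function and parameters are illustrative): at noise rate delta, the noise sensitivity of majority is Theta(sqrt(delta)), matching the d=1 case of the Gotsman-Linial conjecture.

```python
import random

random.seed(0)

def noise_sensitivity(f, n, delta, trials, rng):
    """Monte Carlo estimate of noise sensitivity: the probability that
    flipping each input bit independently with probability delta
    changes the value of f."""
    flips = 0
    for _ in range(trials):
        x = [rng.choice([-1, 1]) for _ in range(n)]
        y = [-b if rng.random() < delta else b for b in x]
        if f(x) != f(y):
            flips += 1
    return flips / trials

# Majority is the degree-1 PTF sgn(x_1 + ... + x_n); n is odd to avoid ties.
maj = lambda x: 1 if sum(x) > 0 else -1
ns = noise_sensitivity(maj, 101, 0.01, 20000, random)
print(ns)  # roughly a constant times sqrt(0.01)
```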
-
Friday, April 19, 2013: Speaker:
Eli Ben-Sasson and Yohay Kaplan
(Technion + MIT).
Topic:
Constant rate PCPs for circuit-SAT with sublinear query complexity.
The PCP theorem says that every NP-proof can be encoded to another proof, namely, a probabilistically checkable proof (PCP), which can be tested by a verifier that queries only a small part of the PCP. A natural question is how large is the blow-up incurred by this encoding, i.e., how long is the PCP compared to the original NP-proof. The state-of-the-art works of Ben-Sasson, Sudan and Dinur show that one can encode proofs of length n by PCPs of length n·polylog(n) that can be verified using a constant number of queries.
Here we show that if the query complexity is relaxed to n^0.1, then one can construct PCPs of bit-length O(n) for circuit-SAT, and PCPs of bit-length O(n log n) for any language in NTIME(n). More specifically, for any epsilon>0 we present (non-uniform) probabilistically checkable proofs (PCPs) of bit-length O_epsilon(n) that query n^epsilon proof-bits for circuit-SAT instances of size n. Our PCPs have perfect completeness and constant soundness. This is the first constant-rate PCP construction that achieves constant soundness with nontrivial query complexity (o(n)).
Our proof replaces the low-degree polynomials in algebraic PCP constructions with tensors of transitive algebraic geometry (AG) codes. We show that the automorphisms of an AG code can be used to simulate the role of affine transformations which are crucial in earlier high-rate algebraic PCP constructions. Using this observation we conclude that any asymptotically good family of transitive AG codes over a constant-sized alphabet leads to a family of constant-rate PCPs with polynomially small query complexity. Such codes are constructed in the appendix to our paper for the first time for every message length, after they have been constructed for infinitely many message lengths by Stichtenoth.
Joint work with Swastik Kopparty and Or Meir, with an Appendix by Henning Stichtenoth.
-
Friday, April 12, 2013: Speaker:
Yael Kalai
(MSR).
Topic:
Efficient interactive coding against adversarial noise.
In this talk we will study the problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic (i.e. drawn from some distribution) or adversarial (i.e., arbitrary subject only to a global bound on the number of errors).
We show how to *efficiently* simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blow-up in the communication complexity (cc). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least 1-2^{-Omega(cc)}.
Joint work with Zvika Brakerski.
-
Friday, April 5, 2013: Speaker:
Madhu Sudan
(MSR).
Topic:
Low-degree testing over small fields.
Let F_q be the field with q elements. We will consider the task of testing if an m-variate function f over F_q is close to being a polynomial of degree at most d. This task was originally considered in the setting where d was small compared to q, and the resulting analyses found many applications in probabilistic checking of proofs. In this talk I will describe more recent results where we consider the setting where d > q. The natural test would be to pick a random subspace of dimension roughly d/q and see if f restricted to this subspace is a polynomial of degree d. Earlier analyses had shown that this natural test rejects functions that are, say, 1% far from being a degree d polynomial, with probability at least q^{-d/q}. In this talk I will describe our improved analyses, which show that this same test rejects such functions with constant probability for constant q. Time permitting I might also mention some applications where the setting of d > q is useful.
Based on joint works with Arnab Bhattacharyya, Elad Haramaty, Swastik Kopparty, Noga Ron-Zewi, Grant Schoenebeck and David Zuckerman.
-
Friday, March 22, 2013: Speaker:
Aaron Sidford and Jon Kelner
(MIT).
Topic:
Why Laplacian Linear Systems Are Hard, and an Easy Way to Solve Them Quickly.
Fast algorithms for solving Laplacian linear systems have found broad applications across both the theory and practice of computer science, playing a foundational role in scientific computing, machine learning, random processes, image processing, network analysis, and computational biology, among others. More recently, they have emerged as a powerful tool in the design of graph algorithms, enabling researchers to break longstanding algorithmic barriers for a wide range of fundamental graph problems.
Finding fast solvers for these systems has been an active area of research since the 1960s, culminating in algorithms that solve them in nearly-linear time. Until recently, the nearly-linear algorithms were quite complex, and developing them required multiple fundamental innovations in spectral and combinatorial graph theory, graph algorithms, and computational linear algebra, which were used to construct and analyze an intricate recursively preconditioned iterative solver.
In recent work with Orecchia and Zhu, we found a new, much simpler combinatorial algorithm that solves Laplacian systems in nearly-linear time while using very little of the machinery that previously appeared necessary for such an algorithm. After constructing a "nice" spanning tree of a graph associated with the system, it just repeatedly applies a simple (non-recursive) update rule implemented using a lightweight data structure. It is numerically stable, has the fastest running time under the standard unit-cost RAM model, and appears amenable to parallel/asynchronous settings.
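For concreteness, the systems in question look as follows (a toy sketch assuming numpy; a dense least-squares solve stands in for the nearly-linear-time solvers discussed in the talk):

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian L = D - A for an unweighted graph on n vertices."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

# A 4-cycle. Lx = b is solvable exactly when b sums to zero (b orthogonal
# to the all-ones kernel of a connected graph's Laplacian).
n = 4
L = laplacian(n, [(0, 1), (1, 2), (2, 3), (3, 0)])
b = np.array([1.0, -1.0, 1.0, -1.0])   # sums to zero
x, *_ = np.linalg.lstsq(L, b, rcond=None)
print(np.allclose(L @ x, b))  # True: an exact solution up to the kernel
```

The dense solve above costs O(n^3); the point of the solvers in the talk is to get within any desired accuracy in time nearly linear in the number of edges.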
In this talk, we will survey how the interplay between the combinatorial structure of a graph and the linear algebraic structure of its Laplacian leads to fast solvers. We will discuss the key challenges that a Laplacian solver faces, give a brief overview of previous algorithms, and then describe our algorithm in detail.
-
Friday, March 15, 2013: Speaker:
Boaz Barak
(MSR).
Topic:
The Sum of Squares Algorithm: applications, connections, and questions.
The Sum of Squares (SoS) algorithm (Parrilo '00, Lasserre '01) is a semidefinite programming relaxation that has been proposed in various communities under varying names.
I'll give a survey of the algorithm from a theoretical computer science perspective, particularly focusing on recent connections to the unique games conjecture and the quantum separability problem.
Some of the topics I hope to cover are:
1) Definition of the primal and dual SoS program in terms of pseudoexpectations and the Positivstellensatz proof system.
2) Lower bounds for random 3XOR
3) A generic scheme for rounding SoS, with applications to the quantum separability problem.
4) Certifying hypercontractivity of low degree polynomials using SoS, with applications to the unique games conjecture.
-
Friday, December 14, 2012: Speaker:
Anindya De
(Berkeley).
Topic:
Reconstruction of halfspaces from their average satisfying assignment.
In the 2nd FOCS conference (1961), C.K. Chow gave a surprising characterization of linear threshold functions: for any distribution over {-1,1}^n, they are uniquely identified, within the space of all [-1,1]-valued functions, by their correlations with the n input variables and the constant function. The proof is easy but non-constructive.
Over the years, the constructive version of this problem (over the uniform distribution) has attracted a lot of attention in computational complexity and learning theory. In social choice theory, the constructive version of this problem (even over distributions other than the uniform) is of interest and corresponds to the problem of constructing weighted voting games for a given set of power indices.
In the past few years, significant progress has been made in this area by drawing on techniques from Fourier analysis, anti-concentration, linear algebra and learning theory. In this talk, we will discuss these results for the uniform distribution and for a nonuniform distribution motivated by social choice theory. The emphasis will be on explaining the key ideas and technical ingredients of these works.
The talk is based on joint works with Ilias Diakonikolas, Vitaly Feldman and Rocco Servedio.
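As a small illustration of the objects in this abstract, here is a brute-force computation of the Chow parameters of a linear threshold function over the uniform distribution on {-1,1}^n; the majority function Maj3 is a hypothetical example halfspace.

```python
import itertools
import numpy as np

# Chow parameters of a halfspace: its correlations with the constant
# function and with each input variable, under the uniform distribution.
# Example function (hypothetical): majority of three bits.
def f(x):
    return 1 if x[0] + x[1] + x[2] > 0 else -1

n = 3
cube = list(itertools.product([-1, 1], repeat=n))
chow = [float(np.mean([f(x) for x in cube]))]   # correlation with constant 1
chow += [float(np.mean([f(x) * x[i] for x in cube])) for i in range(n)]

print(chow)  # for Maj3: [0.0, 0.5, 0.5, 0.5]
```

By Chow's theorem these n+1 numbers determine f among all [-1,1]-valued functions; the talk concerns recovering f from them efficiently.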
-
Friday, October 12, 2012: Speaker:
Michael Forbes
(MIT).
Topic:
Polynomial Identity Testing of Non-Commutative Formulas.
Polynomial Identity Testing (PIT) is the problem of identifying whether an algebraic circuit computes the identically zero polynomial. It is well-known that this problem can be solved by evaluating the circuit at random evaluation points. Recently, there has been much interest in solving this problem deterministically, because it has close connections with circuit lower bounds. Many of the recent works construct deterministic algorithms for PIT for specific circuit classes, as the general case seems difficult.
In this work, we will be focusing on non-commutative polynomials, which are polynomials with formally non-commuting variables, and will focus on such polynomials computed by formulas. We construct a quasi-polynomially sized set of evaluation points, over the ring of matrices, such that a polynomial computed by a small non-commutative formula is zero iff all of its evaluations on these points are zero. This is the first deterministic "black-box" PIT algorithm for non-commutative formulas in sub-exponential time.
Interestingly, this result uses techniques that can be seen as algebraic analogues of the Nisan '92 and Impagliazzo-Nisan-Wigderson '94 pseudorandom generators for space-bounded computation.
No background on polynomial identity testing will be assumed.
Joint work with Amir Shpilka. -
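The randomized test mentioned at the start of the abstract can be sketched in a few lines; the polynomials below are hypothetical toy examples, and the underlying guarantee is the Schwartz-Zippel bound (a nonzero degree-d polynomial vanishes at a random point of S^n with probability at most d/|S|).

```python
import random

# Randomized polynomial identity testing by evaluation at random points.
def is_probably_zero(p, nvars, trials=20, field_size=10**9 + 7):
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(nvars)]
        if p(*point) % field_size != 0:
            return False          # found a witness of nonzeroness
    return True                   # zero at all sampled points

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero; dropping the 2xy term is not.
assert is_probably_zero(lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2, 2)
assert not is_probably_zero(lambda x, y: (x + y)**2 - x**2 - y**2, 2)
```

The talk's challenge is to replace the random points with a small deterministic set, which for non-commutative formulas requires evaluating over matrices rather than field elements.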
Friday, September 28, 2012: Speaker:
Alex Lubotzky
(Hebrew University).
Topic:
Ramanujan graphs.
Ramanujan graphs are finite graphs with optimal bounds on their spectrum. They give outstanding expanders and play an important role in combinatorics and computer science. We will explain, at a gentle pace, the classical way to construct Ramanujan graphs and their properties. If time permits, we will explain briefly "Ramanujan complexes", which are higher dimensional analogues. An effort will be made to make the talk accessible to a wide audience.
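As a concrete (if tiny) illustration of the definition, the following sketch checks numerically that the Petersen graph meets the Ramanujan bound: every adjacency eigenvalue other than the degree d satisfies |lambda| <= 2*sqrt(d-1).

```python
import numpy as np

# Numerical check of the Ramanujan property for the 3-regular Petersen graph.
d = 3
A = np.zeros((10, 10))
for i in range(5):
    for u, v in [(i, (i + 1) % 5),           # outer 5-cycle
                 (i, i + 5),                 # spokes
                 (5 + i, 5 + (i + 2) % 5)]:  # inner pentagram
        A[u, v] = A[v, u] = 1

eig = np.sort(np.linalg.eigvalsh(A))
nontrivial = eig[:-1]                        # drop the trivial eigenvalue d
assert abs(eig[-1] - d) < 1e-9
assert np.max(np.abs(nontrivial)) <= 2 * np.sqrt(d - 1)  # Ramanujan bound
print(np.round(eig, 6))
```

The spectrum here is {3, 1 (x5), -2 (x4)}, and 2 <= 2*sqrt(2); the hard part, of course, is constructing infinite families of such graphs of fixed degree, which is the subject of the talk.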
-
Friday, September 21, 2012: Speaker:
Amir Shpilka
(Technion).
Topic:
Survey of Fast Matrix Multiplication.
In this talk we'll discuss algorithms for Matrix Multiplication, starting from Strassen's first algorithm, going through border rank and Schönhage's tau-theorem, and ending with Strassen's laser method and the Coppersmith-Winograd algorithm (as well as the recent improvement by Vassilevska Williams). Time permitting, we will also discuss the approach of Cohn-Umans. If we have time left after all of this, we'll discuss the connection between a couple of conjectures that may lead to an (almost) optimal algorithm for MM and conjectures related to sunflowers from extremal combinatorics.
Most (if not all) of the talk is based on works of others. The last part is based on a joint work with Noga Alon and Chris Umans. -
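Since the talk starts from Strassen's first algorithm, here is a minimal sketch of it: the seven-product 2x2 block recursion (for n a power of two), which gives the O(n^{log2 7}) ≈ O(n^{2.81}) bound that the later techniques in the talk improve.

```python
import numpy as np

# Strassen's algorithm: 7 recursive block multiplications instead of 8.
def strassen(A, B):
    n = A.shape[0]
    if n <= 2:                      # small base case: multiply directly
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 + M3 - M2 + M6]])

rng = np.random.default_rng(0)
A, B = rng.integers(-5, 5, (8, 8)), rng.integers(-5, 5, (8, 8))
assert np.array_equal(strassen(A, B), A @ B)
```

The laser method and the Coppersmith-Winograd family of algorithms replace this explicit identity with far more intricate tensor constructions, but the counting principle is the same: fewer essential multiplications per block.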
Friday, September 14, 2012: Speaker:
Benny Sudakov
(UCLA).
Topic:
Dense graphs decomposable into large induced matchings and their applications.
How many edges can an n-vertex graph have if it can be decomposed into pairwise disjoint induced matchings of size t=t(n)? This old question of Ruzsa and Szemeredi, surprisingly, has applications to many areas including number theory, complexity and information theory. In this talk we discuss constructions of such graphs and these applications.
(Based on the work with Alon and Moitra) -
Friday, August 31, 2012: Speaker:
Allison Lewko
(MSR).
Topic:
A survey of pairing-based cryptography.
We will motivate the use of bilinear maps (aka pairings) in cryptography by discussing their early applications. We will begin with their initial application as a tool for breaking schemes and then focus on their very different role in moving beyond the functionality of public key encryption. We will also compare the benefits and limitations of pairings with other cryptographic tools, such as lattices. This talk is intended to be accessible to non-cryptographers (and may be boring to experts).
-
Friday, August 24, 2012: Speaker:
Dana Moshkovitz
(MIT).
Topic:
Projection Games.
For the last couple of decades, projection PCPs have been the basis of most optimal NP-hardness of approximation results (PCP=Probabilistically Checkable Proofs).
In this talk I will describe what projection games are, how they are used to derive hardness results, how unique games relate to them, and what the main constructions and open questions are. I'll also discuss a few recent results related to projection PCPs. -
Friday, August 10, 2012: Speaker:
Nati Linial
(Hebrew University).
Topic:
High-dimensional combinatorics.
One of the main themes in my work is the search for geometric perspectives of combinatorics. It should not be too surprising, then, that I started considering notions of dimension in this context. As it turns out, many key objects which are classically studied in combinatorics are, in a very well-defined sense, one-dimensional. This suggests that we consider their higher-dimensional counterparts. We have been pursuing this line of research for the last several years. In this seminar I will present some of our findings. I will start by asking about the right notion of a permutation in higher dimensions, and will offer two separate answers and some of the concrete results that we have in this domain.
Graphs are, of course, one-dimensional simplicial complexes and the theory of random graphs occupies a central role in modern combinatorics and in numerous application areas. One of our main objectives is to develop a theory of random higher-dimensional simplicial complexes.
If time permits, I will also speak about high-dimensional counterparts of trees and of regular graphs. I may also mention connections with coding theory.
The talk is based on joint work with Roy Meshulam, Tomasz Luczak, Mishael Rosenthal, Lior Aronshtam, Zur Luria and Avraham Morgenstern. -
Friday, August 3, 2012: Speaker:
David Gamarnik
(MIT).
Topic:
Combinatorial optimization on random graphs: a survey.
We will survey results on constraint satisfaction problems on sparse random graphs and related models such as the random K-SAT model. We will discuss structural results related to the limiting values of the constraint satisfaction problems when the size of the graph diverges to infinity, the known algorithms to solve these problems, as well as algorithmic challenges of finding solutions to these problems. Fascinating connections with statistical physics concepts, including the theory of spin glasses, phase transitions, and the so-called clustering phenomenon, will permeate the discussion.
-
Friday, July 13, 2012: Speaker:
Robert Krauthgamer
(Weizmann Institute).
Topic:
Graph compression for distances and for cuts.
No abstract.
-
Friday, July 6, 2012: Speaker:
Zvika Brakerski
(Stanford).
Topic:
Homomorphic Encryption.
No abstract.
-
Friday, June 22, 2012: Speaker:
Sreeram Kannan
(University of Illinois at Urbana-Champaign).
Topic:
Cuts and Flows in Polymatroidal Networks.
"Flows" move commodity / information across a network
whereas "cuts" are natural upper bounds on flows. In standard networks,
there are several multi-commodity settings where the gap between
max-flow and min-cut is small (poly-logarithmic). A generalization of
standard network is the *polymatroidal network* model (introduced by
Lawler & Martel and Hassin), where, instead of having individual
capacity constraints on each edge, there is a joint submodular
capacity constraint on the edges incident to a node. This model
turns out to be even more significant in modern times because information flow and capacity in wireless networks are best modeled by such polymatroidal multicommodity flows.
In this talk, I will show that existing results for the
multi-commodity setting in standard networks generalize to
polymatroidal networks. The techniques involved include low-distortion
line-embeddings and convex (Lovasz) extensions of submodular
functions. If there is interest I could also talk about the connection
between polymatroidal flows and wireless capacity.
No specific background will be assumed for the talk.
Joint work with Chandra Chekuri, Adnan Raja and Pramod Viswanath.
-
Friday, June 15, 2012: Speaker:
Brendan Lucier
(MSR New England).
Topic:
Combinatorial Walrasian Equilibrium.
In a combinatorial auction, a seller has m items to sell and n buyers willing to purchase, where each buyer has a value for each subset of the items. In general such auctions are complex, with specification of bids and determination of optimal allocations having exponential complexity in n and/or m. In some special cases, however, there exist ways to set prices on the items for sale so that the market clears when each buyer takes his utility-maximizing set. Such an outcome is known as a Walrasian Equilibrium (WE).
In this talk I will introduce the concept and tell you about some classical settings (the "gross substitutes" condition) under which WE are known to exist. I will then speak about some new results (joint with Michal Feldman and Nick Gravin) where we relax the concept of WE to the combinatorial setting, leading to the notion of a Combinatorial Walrasian Equilibrium (CWE). I will describe some of the properties of this notion and some of the algorithmic problems that arise and solutions we provide. -
Friday, June 1, 2012: Speaker:
Omer Tamuz
(Weizmann Institute).
Topic:
Learning on social networks: a topological approach.
We study a standard model of Bayesian agents in a social network who learn a binary state of the world S, from initial signals, by repeatedly observing each other’s best guesses. In particular, we are interested in the question of asymptotic learning: for which large networks do the agents learn S with high probability? We give a geometrical condition for learning which is derived from the notion of compactness in the Benjamini-Schramm topology of rooted graphs.
No prior knowledge required.
Joint work with Elchanan Mossel and Allan Sly -
Friday, May 11, 2012: Speaker:
Nisheeth Vishnoi
(MSR India).
Topic:
Solve, Walk, Cut.
In this talk I will address the following question: Can one speed-up simulation of random walks given the ability to solve a system of linear equations quickly?
The answer to this will be revealed via an intricate story which stitches together methods from spectral graph theory, numerical linear algebra and approximation theory.
As an application, I will highlight the first near-linear-time approximation algorithm achieving the Cheeger bound for cutting a graph into two roughly equal parts while minimizing the conductance of the partition.
An overview of fast methods to solve linear systems of equations will be provided. -
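One elementary instance of the solve-to-walk direction asked about in the abstract: the stationary behavior of a random walk with restarts (PageRank) is the solution of a single linear system, so a fast solver replaces many simulation steps. The graph below is a hypothetical example.

```python
import numpy as np

# PageRank two ways: as the solution of (I - (1-a) W^T) p = a s, and as the
# limit of simulating the restarted walk step by step.
a = 0.15                                   # teleport (restart) probability
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # hypothetical directed graph
W = A / A.sum(axis=1, keepdims=True)       # random-walk transition matrix
s = np.full(4, 0.25)                       # uniform restart distribution

p = np.linalg.solve(np.eye(4) - (1 - a) * W.T, a * s)   # one linear solve

q = s.copy()                               # cross-check: power iteration
for _ in range(200):
    q = (1 - a) * W.T @ q + a * s
assert np.allclose(p, q)
```

The talk's question is subtler (simulating walks on huge graphs given only a fast Laplacian solver), but the same equivalence between walk distributions and linear systems is the starting point.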
Friday, May 4, 2012: Speaker:
Sham Kakade
(MSR New England).
Topic:
Two SVDs Suffice: Spectral Decompositions for Probabilistic Topic Modeling and Latent Dirichlet Allocation.
The topic modeling problem can be seen as a generalization of the clustering problem: each document has a mixture of active topics (as opposed to being about just one topic), and each active topic determines the occurrence of words in the document. Often, probabilistic models for generating the words in a document, such as the popular latent Dirichlet allocation model, possess a rich representational power by allowing the words in each document to be generated from more than one topic. This increased representational power comes at the cost of a more challenging unsupervised estimation problem: estimating the topic probability vectors (the distributions over words for each topic) when only the words are observed and the corresponding topics are hidden.
We provide a simple and efficient unsupervised learning procedure that is guaranteed to recover the parameters for a wide class of mixture models, including latent Dirichlet allocation (LDA). For LDA, the procedure correctly recovers both the topic probability vectors and the prior over the topics, using only trigram statistics (i.e. third order moments, which may be estimated with documents that contain just three words). The method, termed Excess Correlation Analysis (ECA), is based on a spectral decomposition of low order moments (third and fourth order) via two singular value decompositions (SVDs). Moreover, the algorithm is scalable since the SVD operations are carried out on k by k matrices, where k is the number of latent factors (e.g. the number of topics), rather than in d-dimensional observed space (typically d >> k).
-
Friday, April 20, 2012: Speaker:
Ivan Corwin
(MSR New England).
Topic:
A Few Ideas from Random Matrix Theory and Related Fields.
I will present a few fundamental ideas and methods from random matrix theory as well as related fields such as growth processes and particle systems. In particular, I will explain the moment method as well as the role of orthogonal polynomials and symmetric functions in the analysis of these systems. No background will be necessary and some detailed examples will be given.
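The moment method mentioned in the abstract can be seen in its simplest instance numerically: for a Wigner matrix scaled by sqrt(n), the empirical even moments of the spectrum approach the Catalan numbers 1, 2, 5, ..., the moments of the semicircle law. (The size and tolerance below are arbitrary choices for illustration.)

```python
import numpy as np

# Moment method, simplest instance: empirical spectral moments of a
# normalized Wigner matrix vs. the Catalan numbers.
rng = np.random.default_rng(1)
n = 1000
G = rng.standard_normal((n, n))
A = (G + G.T) / np.sqrt(2)               # GOE-like symmetric matrix
lam = np.linalg.eigvalsh(A) / np.sqrt(n)  # eigenvalues scaled by sqrt(n)

moments = [np.mean(lam ** (2 * k)) for k in (1, 2, 3)]
catalan = [1, 2, 5]
assert np.allclose(moments, catalan, rtol=0.1)
```

Counting which terms survive in E[tr(A^{2k})], and recognizing the count as a Catalan number, is exactly the combinatorial calculation the moment method formalizes.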
-
Friday, April 13, 2012: Speaker:
Eric Price
(MIT).
Topic:
Nearly Optimal Sparse Fourier Transform.
We consider the problem of computing the k-sparse approximation to the
discrete Fourier transform of an n-dimensional signal. We show:
- An O(k log n)-time randomized algorithm for the case where the input signal
has at most k non-zero Fourier coefficients, and
- An O(k log n log(n/k))-time randomized algorithm for general input signals.
Both algorithms achieve o(n log n) time, and thus improve over the
Fast Fourier Transform, for any k = o(n). They are the first known
algorithms that satisfy this property. Also, if one assumes that the
Fast Fourier Transform is optimal, the algorithm for the exactly
k-sparse case is optimal for any k = n^{Omega(1)}.
We complement our algorithmic results by showing that any algorithm
for computing the sparse Fourier transform of a general signal must
use at least Omega(k log(n/k) / log log n) signal samples, even if it is
allowed to perform adaptive sampling.
Joint work with Haitham Hassanieh, Piotr Indyk, and Dina Katabi.
Appearing at STOC, and available at http://arxiv.org/abs/1201.2501 -
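The flavor of the exactly-sparse case is easiest to see for k = 1, where two time samples already determine the lone frequency; the folklore trick below (a sketch with hypothetical parameters, not the paper's algorithm) is, roughly speaking, the kind of 1-sparse primitive that the general algorithms reduce to after hashing coefficients into buckets.

```python
import numpy as np

# A signal with a single nonzero Fourier coefficient can be decoded from
# two time samples: x[1]/x[0] = exp(2*pi*i*f/n) reveals the frequency f.
n, f, a = 1024, 137, 3.0        # hypothetical signal parameters
X = np.zeros(n, dtype=complex)
X[f] = a
x = np.fft.ifft(X)              # x[j] = (a/n) * exp(2*pi*i*f*j/n)

ratio = x[1] / x[0]
f_hat = int(round(n * np.angle(ratio) / (2 * np.pi))) % n
a_hat = n * x[0]
assert f_hat == f and np.isclose(a_hat, a)
```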
Friday, April 6, 2012: Speaker:
Thomas Vidick
(MIT).
Topic:
Multi-prover Interactive Proofs with Entangled Provers.
I will talk about a new result on the hardness of multi-prover
interactive proofs with entangled provers: we prove that MIP* contains
NEXP (joint work with Tsuyoshi Ito). This answers a question open
since entangled-prover interactive proofs were first introduced: it
was not known if the use of entanglement could lead to a systematic
collapse of MIP = NEXP [BFL'91]. The result can also be seen as a step
towards a form of quantum PCP theorem.
The goal will be to give an idea of why we study entangled-prover
interactive proofs (or, entangled games), why they are challenging,
and why their analysis may teach us something even about the classical
setting. I will discuss a quantum analogue of BLR's linearity test,
and extend it to prove the soundness of Babai, Fortnow and Lund's
multilinearity test in the presence of entangled provers. -
Friday, March 30, 2012: Speaker:
Shubhangi Saraf
(IAS).
Topic:
Tight Lower Bounds for 2-query LCCs Over Finite Fields.
A locally correctable code (LCC) is an error correcting code mapping d
symbols to n symbols, such that for every codeword c and every received word r
that is delta-close to c, we can recover any coordinate of c (with high
probability) by only making a few queries to r. LCCs are a stronger form of
Locally Decodable Codes (LDCs) which have received a lot of attention recently
due to their many applications and surprising constructions.
A linear LCC over a finite field F_p is a code where the codewords form a linear
subspace of F_p^n. The code whose codewords are the truth tables of linear
functions evaluated on all of F_p^n is an example of a linear LCC that uses
only 2 queries. In this talk we will show that in some sense this is the only
example of a linear 2 query LCC over F_p.
Specifically, we prove a lower bound of the form p^(delta*d) on the length n of
linear 2-query LCCs over F_p, that encode messages of length d. This also gives
a separation between 2 query LCCs and 2 query LDCs over finite fields of prime
order.
Constructions of such 2 query LCCs are intimately related to special
configurations of lines and points in F_p^n with interesting incidence
properties. The problem of ruling out constructions of small 2-query LCCs boils
down to showing that certain configurations of points and lines do not exist.
Our proof makes use of tools from additive combinatorics. We also obtain, as
corollaries of our main theorem, new results in incidence geometry over finite
fields, such as an improvement to the Sylvester-Gallai theorem over finite
fields and a new analog of Beck's theorem over finite fields.
This is joint work with Arnab Bhattacharyya, Zeev Dvir and Amir Shpilka. -
Friday, March 23, 2012: Speaker:
Christian Sommer
(MIT).
Topic:
Exact and Approximate Shortest-Path Queries.
We discuss the problem of efficiently computing a shortest path between two nodes of a network --- a problem with numerous applications. The shortest-path query problem in particular occurs in transportation (route planning and navigation or also logistics and traffic simulations), in packet routing, in social networks, and in many other scenarios.
Strategies for computing answers to shortest-path queries may involve the use of pre-computed data structures (also called distance oracles) in order to improve the query time. We survey the main ideas and techniques of shortest-path query methods --- some of them are used in state-of-the-art route planning software such as Microsoft Bing Maps. -
Friday, March 2, 2012: Speaker:
Swastik Kopparty
(Rutgers University).
Topic:
The Complexity of Powering in Finite Fields.
I will talk about lower bounds for computing the map x -> x^k over the finite field F_{2^n}, by bounded depth arithmetic circuits over F_2 (aka AC^0(parity)). It involves studying things like the correlation of the cubic residue character with low degree polynomials over F_2.
-
Friday, February 17, 2012: Speaker:
Guy Kindler
(Hebrew University).
Topic:
Gaussian Noise Sensitivity and Fourier Tails.
I will show some new work with Ryan O’Donnell where we manage to prove classical results such as Borell's isoperimetric inequality with very short proofs using only high school trigonometric identities. Then we combine this with the invariance principle (which we will prove as well) to get results such as Majority is Stablest and Bourgain's Junta Theorem. No background in harmonic analysis etc. is assumed.
-
Friday, November 11, 2011: Speaker:
Ran Raz
(Weizmann).
Topic:
The Surprise Examination Paradox and the Second Incompleteness Theorem.
We give a new proof for Godel's second incompleteness theorem, based
on Kolmogorov complexity, Chaitin's incompleteness theorem, and an
argument that resembles the surprise examination paradox.
We then go the other way around and suggest that the second
incompleteness theorem gives a possible resolution of the surprise
examination paradox. Roughly speaking, we argue that the flaw in the
derivation of the paradox is that it contains a hidden assumption that
one can prove the consistency of the mathematical theory in which the
derivation is done; which is impossible by the second incompleteness
theorem.
I will start the talk with a short informal introduction to Godel's
first and second incompleteness theorems.
No knowledge in logic will be assumed.
Joint work with Shira Kritchman -
Friday, November 4, 2011: Speaker:
Eli Ben-Sasson
(Technion/MSR New England).
Topic:
An Additive Combinatorics Approach to the Study of the Log-rank Conjecture in Communication Complexity.
For a {0,1}-valued matrix M let CC(M) denote the deterministic communication complexity of the boolean function associated with M. The log-rank conjecture of Lovasz and Saks [FOCS 1988]
states that CC(M) <= log^c(rank(M)) for some absolute constant c where rank(M) denotes the rank of M over the field of real numbers.
We show that CC(M) <= c rank(M)/ log rank(M) for some absolute constant c, assuming a well-known conjecture from additive combinatorics, known as the Polynomial Freiman-Ruzsa (PFR) conjecture.
Our proof is based on the study of the "approximate duality conjecture" which was recently suggested by Ben-Sasson and Zewi [STOC 2011] and studied there in connection to the PFR conjecture. First we improve the bounds on approximate duality assuming the PFR conjecture. Then we use the approximate duality conjecture (with improved bounds) to get the aforementioned upper bound on the communication complexity of low-rank matrices; this part uses the methodology suggested by Nisan and Wigderson [Combinatorica 1995].
Joint work with Shachar Lovett and Noga Zewi -
Friday, October 28, 2011: Speaker:
Irit Dinur
(Weizmann/MSR New England).
Topic:
Hardness of Approximation for Vertex-cover and Approximate Coloring.
I'll talk about an old (2001) result on the hardness of approximating vertex cover, and how we recycled this result to get a newer (FOCS '10) paper about the hardness of approximate coloring.
Based on joint work with Subhash Khot, Will Perkins & Muli Safra -
Friday, October 21, 2011: Speaker:
Eric Blais
(CMU).
Topic:
Testing k-linear Functions.
Blum, Luby, and Rubinfeld (1993) showed that it is possible to test whether a boolean function is linear -- or, equivalently, whether it returns the parity of some of its inputs -- with a constant number of queries. A closely related problem is testing whether a function returns the parity of *exactly* k of its input bits. We call this problem the k-linearity testing problem.
The k-linearity testing problem plays a central role in property testing on boolean functions. In particular, lower bounds on the query complexity of this problem imply strong lower bounds for testing juntas, for testing small-depth decision trees, for function isomorphism testing, and for a number of other property testing problems. Yet, despite much attention, the query complexity of the k-linearity testing problem has only been settled very recently.
In this talk, we will present the near-optimal analysis of the query complexity for testing k-linearity. We will discuss the connection to other property testing problems. We will also show the connections between testing k-linearity and other areas of mathematics, including high-dimensional geometry, intersecting families, parity decision trees, and communication complexity.
This talk will cover joint work with Noga Alon, Joshua Brody, Daniel Kane, Kevin Matulef, Ryan O'Donnell, Amit Weinstein, and Yuichi Yoshida. -
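For reference, the BLR test from the first paragraph makes just three queries per round; the sketch below (functions encoded on bit-masks, chosen as hypothetical examples) shows a 4-linear function passing and a far-from-linear function failing.

```python
import random

# Blum-Luby-Rubinfeld test: pick random x, y and check
# f(x) XOR f(y) == f(x XOR y). Parities always pass; functions far from
# linear fail with constant probability per round.
def blr_test(f, n, trials=100):
    for _ in range(trials):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

n = 8
parity_of_mask = lambda x: bin(x & 0b10110001).count("1") % 2  # a 4-linear function
assert blr_test(parity_of_mask, n)
majority = lambda x: 1 if bin(x).count("1") > n // 2 else 0    # far from linear
assert not blr_test(majority, n)
```

Testing k-linearity asks the harder question of distinguishing parities of exactly k bits from all other functions, which this three-query test alone cannot do.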
Friday, October 14, 2011: Speaker:
Michael Forbes
(MIT).
Topic:
Identity Testing of Tensors, Low Rank Recovery and Compressed Sensing.
A matrix A naturally defines a quadratic form x^t A y. If A has rank at most r, then a rank-r decomposition of A corresponds naturally to a circuit of size ~2nr for computing the quadratic form. It is clear how to perform "white box" polynomial identity testing for such circuits, and the motivating question for this work is to explore black-box identity testing. The probabilistic method shows that there is a hitting set of size ~2nr for such polynomials. In this work we match this bound (over large fields), which is optimal up to constant factors.
Further, we explore how A can be reconstructed from the evaluations of the quadratic form. Similar probabilistic constructions show that there exist ~4nr evaluations which determine any such matrix A. Again, we match this bound (over large fields) with an explicit construction, and furthermore give a polynomial-time algorithm to reconstruct A from such evaluations. More generally, we show an efficient reduction from (exact) low-rank matrix reconstruction to (exact) sparse recovery. This reduction is novel in the compressed-sensing realm as it is field independent and unrelated to convex optimization.
Finally, we also derive some similar, but weaker, results for higher-order tensors.
Joint work with Amir Shpilka. -
Friday, October 7, 2011: Speaker:
Gregory Valiant
(UC Berkeley).
Topic:
The Surprising Power of Linear Estimators.
For a broad class of practically relevant distribution properties, which includes entropy and support size, nearly all of the proposed estimators have an especially simple form. Given a set of independent samples from a discrete distribution, these estimators tally the vector of summary statistics - the number of domain elements seen once, twice, etc. in the sample - and output the dot product between these summary statistics and a fixed vector of coefficients. We term such estimators linear. This historical proclivity towards linear estimators is slightly perplexing, since, despite many efforts over nearly 60 years, all proposed such estimators have significantly suboptimal convergence, compared to our recent bounds. In joint work with Paul Valiant, in some sense vindicating this historical insistence on linear estimators, we show that for any property in this broad class, there exists a near-optimal linear estimator. Our proof exposes---and relies on---a fundamental relationship between 'good' lower bound instances and 'good' linear estimators.
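The "linear estimator" form described here is easy to make concrete: compute the fingerprint of the sample and take a dot product with fixed coefficients. The all-ones coefficient vector below recovers the naive support-size estimate; the sample is a hypothetical toy example, and the paper's point is that cleverer coefficient vectors are near-optimal.

```python
from collections import Counter

# Fingerprint: how many domain elements appear once, twice, ... in the sample.
def fingerprint(samples):
    counts = Counter(samples)            # element -> multiplicity
    return Counter(counts.values())      # multiplicity -> #elements

# A linear estimator is a dot product of the fingerprint with fixed coefficients.
def linear_estimate(fp, coeff):
    return sum(coeff(i) * fp[i] for i in fp)

samples = ["a", "b", "a", "c", "c", "c", "d"]
fp = fingerprint(samples)                # {1: 2 (b, d), 2: 1 (a), 3: 1 (c)}
assert linear_estimate(fp, lambda i: 1) == 4   # distinct elements seen
```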
-
Friday, September 30, 2011: Speaker:
David Steurer
(MSR New England).
Topic:
Making the Long Code Shorter with Applications to the Unique Games Conjecture.
The long code is a central tool in hardness of approximation, especially in questions related to the Unique Games Conjecture. We construct a new code that is exponentially more efficient, but can still be used in many of these applications. Using the new code we obtain exponential improvements over several previous results on integrality gaps and alphabet reductions.
Underlying our results is a new connection between locally testable codes and small-set expanders with many large eigenvalues.
As a prequel to this talk, I will also report on the status of the Unique Games Conjecture in general.
Based on joint work with Boaz Barak, Parikshit Gopalan, Johan Hastad, Raghu Meka, and Prasad Raghavendra. -
Friday, September 23, 2011: Speaker:
Aleksander Mądry
(MSR New England).
Topic:
Online Algorithms and the k-server Problem.
I will be talking about a recent result that establishes a polylogarithmic-competitive algorithm for the k-server problem (this is joint work with Nikhil Bansal, Niv Buchbinder, and Seffi Naor). I plan to start with a brief presentation of the history and background of the problem, then sketch the result, and cover in more detail some of its parts that I believe to be interesting. No background on the k-server problem (or online algorithms in general) is assumed.
-
Friday, September 16, 2011: Speaker:
Shachar Lovett
(IAS).
Topic:
T-designs and k-wise Independent Permutations.
I will speak on a recent work with Greg Kuperberg and Ron Peled. We show the existence of small families of t-designs and k-wise independent permutations using local limit theorems. For t-designs we are able to handle almost all settings of parameters, while previous results (e.g. Teirlinck's theorem) could show existence only for a few settings; for k-wise independent permutations we can handle any k, while previous results could only handle k=1,2,3.
-
Friday, September 9, 2011: Speaker:
László Lovász
(Eötvös Loránd University).
Topic:
Theory of Graph Limits.
(no abstract)
-
Friday, September 2, 2011: Speaker:
Avi Wigderson
(IAS).
Topic:
Determinant and Permanent.
I'll survey a few of the many old and new results about these two similar, yet so different, polynomials.
The main purpose is to raise a few of the many old and new open problems about them, and to prove a few of the many old and new results, focusing on arithmetic complexity. No particular knowledge is required.
Past Organizers: Aleksander Mądry, David Steurer.
Website designed by Aleksander Mądry, and currently maintained by Madhu Sudan.