Every Wednesday

March-April 2005
- Amir Yehudayoff: Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds
- Gidi Amir: One-Dimensional Diffusion Limited Aggregation
- (Papadimitriou’s lecture)
- Iftach Haitner: Construction of a Pseudo-Random Generator From Any One-Way Function
- Martingales and Collective Coin Flipping

May 2005
- (Yom Ha’zikaron)
- Danny Hefetz: On Avoider-Enforcer Games
- Lattices in Computer Science and A Sieve Algorithm for the Shortest Lattice Vector Problem

June 2005
- (OBERWOLFACH)
- Expander Flows, Geometric Embeddings, and Graph Partitionings
- (cont’d)
- Danny Harnik

July 2005
- Bourgain's Two-Source Extractor(s)
- (cont’d)

August 2005
- One-Way Functions are Essential for Complexity Based Cryptography
- Elchanan Mossel: On Sensitivity and Chaos
Title: Derandomizing Polynomial Identity Tests Means Proving Circuit Lower Bounds
Speaker: Amir Yehudayoff
(Based on a paper by Valentine Kabanets and Russell Impagliazzo, 2003.)
Arithmetic Circuit Identity Testing is a version of the Polynomial Identity Testing problem: given an arithmetic circuit C computing a polynomial p, decide whether p ≡ 0.
In their paper they show that "derandomization of Polynomial Identity Testing is, essentially, equivalent to proving circuit lower bounds for NEXP." We will not cover the entire paper; rather, we will, essentially, prove the quote above, i.e.,
1. Derandomization of Polynomial Identity Testing implies some sort of lower bound.
2. Proving some lower bound is (actually) a derandomization of Polynomial Identity Testing.
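For context, the standard randomized identity test simply evaluates the circuit at a uniformly random point: by the Schwartz-Zippel lemma, a nonzero polynomial of total degree at most d vanishes at a random point of F_p^m with probability at most d/p. The sketch below illustrates only that baseline test; the gate encoding, field size and trial count are my own choices, not taken from the talk.

```python
import random

def eval_circuit(circuit, point, p):
    """Evaluate an arithmetic circuit at `point` over F_p.
    `circuit` is a list of gates in topological order:
    ('in', i), ('const', c), ('add', j, k) or ('mul', j, k),
    where j, k index earlier gates."""
    vals = []
    for gate in circuit:
        if gate[0] == 'in':
            vals.append(point[gate[1]] % p)
        elif gate[0] == 'const':
            vals.append(gate[1] % p)
        elif gate[0] == 'add':
            vals.append((vals[gate[1]] + vals[gate[2]]) % p)
        else:  # 'mul'
            vals.append((vals[gate[1]] * vals[gate[2]]) % p)
    return vals[-1]

def randomized_pit(circuit, num_vars, degree_bound, p, trials=20):
    """Declare the polynomial identically zero iff all random evaluations are zero.
    When the polynomial is nonzero, each trial errs with probability at most
    degree_bound / p (Schwartz-Zippel)."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(num_vars)]
        if eval_circuit(circuit, point, p) != 0:
            return False  # witness that the polynomial is not identically zero
    return True

# Example: x0 + (p-1)*x0, which is identically zero over F_p.
p = 10**9 + 7
C = [('in', 0), ('const', p - 1), ('mul', 0, 1), ('add', 0, 2)]
print(randomized_pit(C, num_vars=1, degree_bound=1, p=p))  # True
```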
Title: One-Dimensional Diffusion Limited Aggregation
Speaker: Gidi Amir
In a paper from 1981, Witten and Sander introduced the (2-dimensional) DLA process, a stochastic growth model in which a "discrete fractal" subset of Z^2 is grown from the origin in an inductive process: in each step a particle, starting at "infinity", makes a simple random walk on Z^2 until it becomes a neighbor of the aggregate, and then glues to it. The process turned out to be extremely difficult to analyze, and in spite of attracting a lot of attention, very little is known about the structure of the n-th step set or the limit set.
In this lecture we will define a family of 1-dimensional analogs of the standard DLA process, the 1-DLA processes, which are more amenable to analysis. The process is parameterized by an integer random variable X, which we will call the step distribution. The aggregates are subsets of Z built inductively by starting with the set A_0 = {0} and then at each step releasing a particle "near infinity", letting it make a random walk with jump distribution X until it hits A_n. If the particle first hits the set A_n by making a jump from x ∉ A_n to a ∈ A_n, then we set A_{n+1} = A_n ∪ {x}. (Precise definitions will be given.) We will analyze this process for various choices of the step distribution, obtaining bounds on the growth of the diameter, and exhibit a transient walk for which the limit aggregate is the whole of Z even though the diameter grows super-exponentially.
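To make the process concrete, here is a minimal simulation sketch (my own illustration, not code from the paper). Releasing particles only from the right, starting them a fixed offset beyond the aggregate instead of "at infinity", and restarting over-long walks are practical simplifications; `step_dist` stands in for the step distribution X.

```python
import random

def one_dla(num_particles, step_dist, offset=20, max_steps=10**6):
    """Minimal 1-DLA sketch on Z. `step_dist()` samples the jump distribution X.
    Each particle starts `offset` sites beyond the aggregate's rightmost point
    (a crude stand-in for "near infinity"); walks longer than `max_steps` are
    discarded and restarted, a practical truncation only."""
    aggregate = {0}
    right = 0
    n = 0
    while n < num_particles:
        x = right + offset
        for _ in range(max_steps):
            nxt = x + step_dist()
            if nxt in aggregate:
                aggregate.add(x)        # the particle glues at the site it jumped from
                right = max(right, x)
                n += 1
                break
            x = nxt
    return aggregate

# Example: simple random walk steps (X = +1 or -1 with equal probability);
# heavier-tailed step distributions give more interesting aggregates.
agg = one_dla(100, lambda: random.choice([-1, 1]))
print("diameter:", max(agg) - min(agg), "size:", len(agg))
```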
No knowledge of random walks is assumed for the lecture; I will explain the facts that I use in the proofs.
Note that DLA creates great pictures to hang on the wall, and is even used as a practical model in various areas of physics.
Title:
Speaker:
Multiple-source extractors with entropy requirement δ are deterministic functions that take several inputs of n bits each, such that if each input is drawn from a distribution with min-entropy δn, the output is close to uniform. We will discuss the construction of Boaz Barak, Russell Impagliazzo, and Avi Wigderson, which is the first such extractor with entropy requirement less than 1/2 and a constant number of inputs. The results are based on several results from additive number theory, whose use may be of independent interest.
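For reference, the standard definitions behind this statement (the symbols δ, ε, t and U_m below are generic notation, not taken from the abstract):

```latex
% Min-entropy of a source $X$ over $\{0,1\}^n$:
\[
  H_\infty(X) \;=\; \min_{x}\ \log_2 \frac{1}{\Pr[X=x]} .
\]
% A function $E\colon(\{0,1\}^n)^t \to \{0,1\}^m$ is a $t$-source extractor with
% entropy requirement $\delta$ and error $\varepsilon$ if, for all independent
% sources $X_1,\dots,X_t$ with $H_\infty(X_i)\ge \delta n$ for each $i$,
\[
  \mathrm{dist}\bigl(E(X_1,\dots,X_t),\,U_m\bigr) \;\le\; \varepsilon ,
\]
% where $U_m$ is uniform on $\{0,1\}^m$ and $\mathrm{dist}$ is statistical distance.
```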
Title: Construction of a Pseudo-Random Generator From Any One-Way Function
Speaker: Iftach Haitner
A pseudorandom generator, a notion first introduced by \cite{FOCS82*112} and stated in its current, equivalent, form by \cite{MR780384}, is one of the basic primitives in the study of the interaction between random and deterministic algorithms. Pseudorandom generators also have many cryptographic applications (for example \cite{Naor89, FOCS84*464, Luby:1988:HCP}). Informally, a pseudorandom generator is a polynomial-time computable function g that stretches a short random string x into a long string g(x) that "looks" random to any efficient (i.e., polynomial-time) algorithm. That is, no efficient algorithm can distinguish between g(x) and a truly random string of length |g(x)| with more than negligible probability. Thus we can use g to convert a small amount of randomness into a larger number of effectively random bits.
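Formally, this indistinguishability requirement is usually stated as follows (a standard formulation, not quoted from the talk):

```latex
% $g\colon\{0,1\}^n \to \{0,1\}^{\ell(n)}$ with $\ell(n) > n$ is a pseudorandom
% generator if it is computable in polynomial time and, for every probabilistic
% polynomial-time distinguisher $D$,
\[
  \Bigl|\, \Pr_{x \leftarrow U_n}\bigl[D(g(x)) = 1\bigr]
      \;-\; \Pr_{y \leftarrow U_{\ell(n)}}\bigl[D(y) = 1\bigr] \,\Bigr|
  \;\le\; \mathrm{negl}(n).
\]
```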
The first construction of a pseudorandom generator was given by \cite{FOCS82*112}; their construction, based on a particular one-way function, was later generalized by \cite{MR780384} into a construction of a pseudorandom generator based on any one-way permutation. The question of constructing a pseudorandom generator from any one-way function remained open until Håstad et al.\ \cite{Hill}, who, following \cite{lncs403*146, STOC::ImpagliazzoLL1989}, presented such a construction.
In the talk I will give a high-level overview of the Håstad et al.\ construction.
Title: Martingales and Collective Coin Flipping
Speaker:
Based on an unpublished manuscript by Richard Cleve and Russell Impagliazzo (1993).
The main result concerns martingale sequences that correspond to a situation where information about an event is gradually released. We will show that for any martingale X_0, ..., X_n where X_0 = 1/2 and X_n is either 0 or 1, with constant probability there exists i such that |X_i - X_{i-1}| = Ω(1/√n). This is related to (but not a direct consequence of) the "martingale tail inequality" of Azuma (1967) and Hoeffding (1963).
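For comparison, the Azuma-Hoeffding tail inequality mentioned above states (in one standard form):

```latex
% For a martingale $X_0,\dots,X_n$ with bounded increments $|X_i - X_{i-1}| \le c_i$,
\[
  \Pr\bigl[\,|X_n - X_0| \ge t\,\bigr]
  \;\le\; 2\exp\!\Bigl(-\frac{t^2}{2\sum_{i=1}^{n} c_i^2}\Bigr).
\]
% It bounds the probability of a large total deviation, whereas the result above
% guarantees that some single increment is large with constant probability.
```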
This result has some interesting applications. In particular, we will show that
for any r-round two-party collective coin flipping protocol where
bit-commitment is provided as a primitive, there must be one player that can
bias the outcome of the other player by Ω(1/√r).
This improves a previous lower bound (by Cleve) of Ω(1/r),
and is tight for this model.
The talk will not assume knowledge of either martingales or coin flipping.
Title:
Speaker:
(Based on a paper by Andrej Bogdanov.)
Let d and m be integers, and let F be a finite field.
In this work, Bogdanov shows how to construct a small subset of F^m (from which one can sample efficiently) such that, for any polynomial p: F^m → F of total degree d, when an input is chosen uniformly from this small set, the output distribution of p is close to the output distribution of p on a uniform sample from all of F^m.
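In symbols, the property being constructed can be phrased as follows (the error parameter ε is generic notation; the abstract does not specify it):

```latex
% A set $S \subseteq \mathbb{F}^m$ $\varepsilon$-fools degree-$d$ polynomials if,
% for every polynomial $p\colon\mathbb{F}^m \to \mathbb{F}$ of total degree $d$,
\[
  \mathrm{dist}\bigl(\,p(U_S),\; p(U_{\mathbb{F}^m})\,\bigr) \;\le\; \varepsilon ,
\]
% where $U_S$ and $U_{\mathbb{F}^m}$ are uniform on $S$ and on $\mathbb{F}^m$,
% and $\mathrm{dist}$ denotes statistical distance.
```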
I will sketch the construction and its proof and present the mathematical tools used.
Title: On Avoider-Enforcer Games
Speaker: Dan Hefetz
Let p and q be positive integers and let H be a hypergraph. In a (p,q,H) Avoider-Enforcer game two players, called Avoider and Enforcer, take turns selecting previously unclaimed vertices of H. Avoider selects p vertices per move and Enforcer selects q vertices per move. Avoider loses if he claims all the vertices of some hyperedge of H; otherwise Enforcer loses. We prove a sufficient condition for Avoider to win the (p,q,H) game, and use it to analyze several classic games: connectivity, Hamiltonicity, and perfect matching.
Some of our results are quite surprising as they differ from those
obtained for the analogous Maker-Breaker games.
Joint work with Michael Krivelevich and Tibor Szabo.
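To make the rules concrete, here is a minimal sketch of playing one (p,q,H) game; the strategy functions are arbitrary placeholders (not the strategies analyzed in the talk), and the handling of the last, possibly partial, move is just one common convention.

```python
import random

def play_avoider_enforcer(vertices, hyperedges, p, q,
                          avoider_strategy, enforcer_strategy):
    """Play one (p,q,H) Avoider-Enforcer game and return the winner.
    Avoider claims p free vertices per move, Enforcer claims q.
    Avoider loses iff his claimed set ever contains a whole hyperedge."""
    free = set(vertices)
    avoider_claimed = set()
    while free:
        # Avoider's move (he may take fewer vertices if not enough remain)
        picks = avoider_strategy(free, avoider_claimed, hyperedges)[:min(p, len(free))]
        avoider_claimed.update(picks)
        free.difference_update(picks)
        if any(set(e) <= avoider_claimed for e in hyperedges):
            return "Enforcer"          # Avoider completed a hyperedge and loses
        if not free:
            break
        # Enforcer's move
        picks = enforcer_strategy(free, avoider_claimed, hyperedges)[:min(q, len(free))]
        free.difference_update(picks)
    return "Avoider"                   # Avoider never completed a hyperedge

def random_strategy(free, claimed, hyperedges):
    """Placeholder strategy: claim free vertices in a random order."""
    return random.sample(sorted(free), len(free))

# Example: H = the three edges of a triangle; with p = q = 1 Avoider always
# ends up claiming two vertices, i.e., an edge, so Enforcer wins.
print(play_avoider_enforcer([0, 1, 2], [(0, 1), (0, 2), (1, 2)], 1, 1,
                            random_strategy, random_strategy))
```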
Title: Lattices in Computer Science and A Sieve Algorithm for the Shortest Lattice Vector Problem
Speaker:
The talk has two goals:
1. Introduce an area: lattices; definitions (sketched below), algorithmic problems, known results, and connections to complexity and cryptography.
2. Present a nice algorithm: the randomized sieve algorithm of Ajtai, Kumar and Sivakumar (STOC 2001), which finds a shortest vector in a lattice in time 2^{O(n)}. [This problem is NP-hard, and prior to this paper the known algorithms had larger exponents in the running time.]
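For reference, the basic definitions (standard notation, not taken from the talk):

```latex
% Given a basis $B = (b_1,\dots,b_n)$ of linearly independent vectors in $\mathbb{R}^n$,
% the lattice it generates is
\[
  \mathcal{L}(B) \;=\; \Bigl\{ \textstyle\sum_{i=1}^{n} x_i b_i \;:\; x_i \in \mathbb{Z} \Bigr\}.
\]
% The Shortest Vector Problem (SVP): given $B$, find a nonzero $v \in \mathcal{L}(B)$
% minimizing $\|v\|$; the minimum length is denoted $\lambda_1(\mathcal{L}(B))$.
```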
Some of the talk will be based on an excellent course
given by
Title:
Speaker:
Initially a car is placed with probability p at each site of the two-dimensional integer lattice.
Each car is equally likely to be East-facing or North-facing, and different sites receive independent assignments.
At odd time steps, each North-facing car moves one unit North if there is a vacant site for it to move into.
At even time steps, East-facing cars move East in the same way.
This is called the BML (Biham-Middleton-Levine) Traffic model.
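A minimal simulation sketch of the model just described (my own illustration; the torus boundary, lattice size and time horizon are arbitrary choices, not from the paper):

```python
import random

def bml_step(grid, n, kind, dr, dc):
    """Synchronously move every car of `kind` whose target site is vacant.
    Returns the number of cars that moved."""
    movers = [(r, c) for r in range(n) for c in range(n)
              if grid[r][c] == kind and grid[(r + dr) % n][(c + dc) % n] is None]
    for r, c in movers:
        grid[(r + dr) % n][(c + dc) % n] = kind
        grid[r][c] = None
    return len(movers)

def bml(n=64, p=0.3, steps=500):
    """BML traffic sketch on an n x n torus: 'N' cars move up, 'E' cars move right."""
    grid = [[random.choice("NE") if random.random() < p else None
             for _ in range(n)] for _ in range(n)]
    moved = 0
    for t in range(steps):
        if t % 2 == 0:   # the abstract's odd time steps: North-facing cars move
            moved = bml_step(grid, n, "N", -1, 0)
        else:            # even time steps: East-facing cars move
            moved = bml_step(grid, n, "E", 0, 1)
    return moved         # 0 suggests the cars have (at least locally) jammed

print(bml(p=0.25), bml(p=0.45))
```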
It has long been conjectured (and is confirmed by simulations) that for large enough p
all cars are eventually jammed, while for p below some critical p_c, traffic flows freely.
The first part was recently proved by Angel, Holroyd and Martin.
I am going to present their paper and some background.
No prior knowledge is necessary.
Title: Expander Flows, Geometric Embeddings, and Graph Partitionings
Speaker:
We present Arora, Rao and Vazirani's paper with the same title.
One main result is a √(log n)-approximation for the edge-expansion of a graph (exact computation is NP-hard).
The techniques used are geometric embeddings in the spirit of Goemans-Williamson, and
concentration of measure.
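For reference, edge expansion can be defined as follows (normalizations vary; this is one common form, not necessarily the paper's exact one):

```latex
% Edge expansion of a graph $G=(V,E)$:
\[
  h(G) \;=\; \min_{\substack{S \subseteq V \\ 0 < |S| \le |V|/2}}
             \frac{|E(S, V\setminus S)|}{|S|},
\]
% where $E(S, V\setminus S)$ is the set of edges crossing the cut.
% The algorithm returns a cut whose expansion is within an $O(\sqrt{\log n})$
% factor of $h(G)$.
```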
Title:
Speaker: Danny Harnik
We define pseudorandom generators for
Based on a paper by Impagliazzo, Nisan and Wigderson from 1994.
Title: Bourgain's Two-Source Extractor(s)
Speaker:
A two-source extractor is a function E(x,y) that takes as input two samples from two independent weak sources, and outputs an almost uniform bit (or bits).
Very recently Bourgain gave a new construction of a two-source extractor that works when the min-entropy rate of its two inputs is larger than ~0.49.
All previous constructions require at least one of the inputs to have min-entropy rate larger than some constant > 0.5.
Title:
Speaker:
I will give some additional details on undirected connectivity in deterministic log-space (e.g., universal traversal and exploration sequences). I will also discuss some attempts "towards" proving RL=L (or even BPL=L). These attempts are joint with Luca Trevisan and Salil Vadhan.
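For reference, a universal traversal sequence is defined as follows (standard definition, not from the abstract):

```latex
A sequence $\sigma \in \{0,1,\dots,d-1\}^{\ell}$ is an $(n,d)$-universal traversal
sequence if, for every connected $d$-regular graph on $n$ vertices whose edges are
labeled $0,\dots,d-1$ at each vertex, and for every start vertex, the walk that
follows the labels $\sigma_1, \sigma_2, \dots, \sigma_\ell$ visits all $n$ vertices.
```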
Title: One-Way Functions are Essential for Complexity Based Cryptography
Speaker:
Under the assumption that (a certain kind of) one-way function exists, it is known that a wide variety of cryptographic objects and protocols can be constructed: from pseudo-random generators to zero-knowledge proofs for any efficiently provable statement (i.e. for all of IP).
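For reference, the standard definition of a one-way function (not quoted from the paper):

```latex
% $f\colon\{0,1\}^* \to \{0,1\}^*$ is one-way if it is computable in polynomial
% time and, for every probabilistic polynomial-time algorithm $A$,
\[
  \Pr_{x \leftarrow U_n}\bigl[\, A\bigl(1^n, f(x)\bigr) \in f^{-1}\bigl(f(x)\bigr) \,\bigr]
  \;\le\; \mathrm{negl}(n).
\]
```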
A natural question is whether, in fact, the assumption that one-way functions exist is simply too strong. In their paper from 1989, Impagliazzo and Luby tackled this question and formalized the intuition that one-way functions are not only a "sufficient" tool for building good cryptographic primitives, but are also a "necessary" tool for many central cryptographic tasks.
In this talk I hope to survey some results from Impagliazzo and Luby's original paper and a more complex result of Ostrovsky and Wigderson (from 1993), who showed that one-way functions are essential for non-trivial zero-knowledge.
Title: On Sensitivity and Chaos
Speaker: Elchanan Mossel (UC Berkeley)
I will discuss some (very) recent results showing how techniques from the theory of Gaussian Hilbert spaces can be used to solve a number of open problems in discrete Fourier analysis.
Joint work with Ryan O'Donnell and Krzysztof Oleszkiewicz.
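For orientation, the objects of discrete Fourier analysis referred to above are usually set up as follows (standard notation; the particular open problems solved are not stated in the abstract):

```latex
% Every $f\colon\{-1,1\}^n \to \mathbb{R}$ has a unique Fourier expansion
\[
  f(x) \;=\; \sum_{S \subseteq [n]} \hat{f}(S) \prod_{i \in S} x_i ,
\]
% and its noise stability at parameter $\rho \in [-1,1]$ is
\[
  \mathbb{S}_\rho(f) \;=\; \sum_{S \subseteq [n]} \rho^{|S|}\, \hat{f}(S)^2 .
\]
```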