\documentclass[10pt]{article}
\newtheorem{define}{Definition}
%\newcommand{\Z}{{\mathbb{Z}}}
\usepackage{psfig}
\oddsidemargin=0.15in
\evensidemargin=0.15in
\topmargin=-.5in
\textheight=9in
\textwidth=6.25in
\newcommand{\BI}{\begin{itemize}}
\newcommand{\EI}{\end{itemize}}
\newcommand{\BE}{\begin{enumerate}}
\newcommand{\EE}{\end{enumerate}}
\def\NP{{\bf NP} }
\def\P{{\bf P} }
\begin{document}
\input{../preamble.tex}
%{lecture number}{lecture date}{Madhu Sudan}{Rafael Pass}
\lecture{1}{February 2, 2005}{Madhu Sudan}{Rafael Pass}
\section{Administrivia}
The course instructor is Madhu Sudan ({\sf madhu@mit.edu}), and the TA
is Sergey Yekhanin \newline ({\sf yekhanin@theory.csail.mit.edu}).
Send an email
to {\sf madhu@mit.edu} to be added to the course mailing list.
\paragraph{Grading}
The course will be graded based on 4 problem sets. Each student is
also expected to take scribe notes at least once.
The first problem set is out now and is due in 2 weeks.
\section{Goals of Complexity}
The goals of Complexity can be summarized as follows:
\BI
\item Identify important computational problems,
\item Analyze what kinds of resources are needed to solve them.
\EI
Consider, for instance, the following problems: SAT, Sorting, Matchings in graphs, Multiplication, Factoring, Turing machine halting.
Natural questions that arise are:
\BI
\item \emph{What are the best solutions to these problems?}
\item \emph{Are they the best possible?}
\EI
Secondly, we would like to understand the ``relations'' between the different problems by ``drawing arrows between them''. Towards this goal we first need to
define the notion of \emph{efficiency}. This is done by:
\BE
\item Identifying a resource of interest,
\item Placing coarse limits on it.
\EE
Having defined a notion of efficiency, we can now place all the
computational problems of interest as points in a plane and start drawing
arrows between them, based on how they relate to each other.
More specifically, we draw an arrow from A to B (A $\rightarrow$ B) if the following condition holds: \emph{if B can be solved efficiently, then so can A}.
\subsection{Computational ``Phenomena'' - Complexity Classes}
Note that if we use a ``reasonable'' definition of efficiency then the
arrows are transitive. We might thus be able to identify, say, two
different sets of problems such that all problems within each set are connected
to each other, but arrows between the sets go in only one direction.
Such a computational ``phenomenon'' gives rise to the notion of a
\emph{complexity class}. For instance, as far as we know (today)
\footnote{That is, we do not know of any ``arrows'' going from the class
\NP to \P.} the classes \P and \NP have this property.
(Also note that for the class {\bf co-NP}, the only known arrows
go from the class \P to {\bf co-NP}, while \emph{no}
arrows between the classes \NP and {\bf co-NP} are known. In other
words, {\bf co-NP} is ``harder'' than \P, but the relation between
\NP and {\bf co-NP} is unknown.)
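One way to see the transitivity claimed above (a sketch, assuming that the composition of two efficient procedures is again efficient): if A $\rightarrow$ B and B $\rightarrow$ C, then an efficient solution to C yields one for B, which in turn yields one for A. Hence
\[
(A \rightarrow B) \wedge (B \rightarrow C) \Longrightarrow (A \rightarrow C).
\]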
\section{Why is Complexity Interesting?}
All the processes in the brain can be seen as computation. Thus,
THOUGHT is Computation.
Furthermore, Computation allows us to define many notions that are ``natural'' to us
but ``elusive'' mathematically. For instance, (as we will see later in the
course), Computation allows us to separate notions such as:
\BE
\item Easy vs. Hard
\item Proof vs. Theorem
\item Interaction in Proof (debate) vs. Written proof
\item Learning (by yourself) vs. Teaching
\EE
In fact, the question of whether $\P = \NP$ is the \emph{most interesting}
mathematical question! For if $\P = \NP$, there are no ``interesting''
theorems in mathematics, as they can all be ``decided efficiently''.
If, on the other hand, $\P \neq \NP$, then there is at least one interesting
theorem. That is, the question of $\P$ vs.\ $\NP$ is ``mathematics-Complete''!
(which in turn means that the course 6.841 is MIT-complete).
\section{Computational Problems}
One way to view a computational problem is as a function from bitstrings $x$ to
bitstrings $y$. This view can be generalized (from functions) to relations,
resulting in the following description of a general computational problem:
\BI
\item \emph{ Given a bitstring $x$, find a bitstring $y$ such that
$(x,y) \in R \subseteq \{0,1\}^* \times \{0,1\}^*$.}
\EI
In this course we will mainly focus on computational problems for
languages, namely questions of the following kind:
\BI
\item \emph{Given $x\in \{0,1\}^*$, is $x\in L$?}
\EI
Note that in general, every search problem can be reduced to a decision
problem. An example of this will be on the problem set.
\paragraph{Exercise:} Reduce the search problem of SAT (finding a satisfying
assignment) to the language SAT (deciding satisfiability).
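One standard approach to this exercise (a sketch added for illustration, not the problem-set solution spelled out in class) uses the \emph{self-reducibility} of SAT: fix the variables one at a time, consulting the decision oracle after each substitution. The clause encoding, function names, and the brute-force stand-in for the oracle below are all illustrative choices, not from the lecture.

```python
from itertools import product

# A CNF formula is a list of clauses; each clause is a list of signed
# variable indices: i stands for x_i, and -i for its negation.
# Example: (x1 OR NOT x2) AND (x2 OR x3)  ->  [[1, -2], [2, 3]]

def evaluate(cnf, assignment):
    """Does the full assignment (dict: variable -> bool) satisfy cnf?"""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in cnf)

def sat_decide(cnf, n):
    """Stand-in for the SAT decision oracle on variables 1..n.
    (Brute force here; the reduction treats it as a black box.)"""
    return any(evaluate(cnf, dict(zip(range(1, n + 1), bits)))
               for bits in product([False, True], repeat=n))

def sat_search(cnf, n):
    """Find a satisfying assignment using only the decision oracle,
    by self-reducibility: fix the variables one at a time."""
    if not sat_decide(cnf, n):
        return None
    assignment = {}
    for v in range(1, n + 1):
        for value in (True, False):
            # Substitute x_v = value: drop satisfied clauses and
            # delete the now-false literal from the others.
            substituted, consistent = [], True
            for clause in cnf:
                if (v if value else -v) in clause:
                    continue  # clause satisfied by x_v = value
                rest = [l for l in clause if abs(l) != v]
                if not rest:
                    consistent = False  # clause falsified
                    break
                substituted.append(rest)
            if consistent and sat_decide(substituted, n):
                assignment[v], cnf = value, substituted
                break
    return assignment
```

Note that the search procedure makes only $O(n)$ oracle calls, so if the decision version were efficiently solvable, so would the search version be.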
\paragraph{Why will we focus on languages (rather than relations)?}
The reason for focusing only on languages is that there is a
simple way of talking about reductions between problems for languages!
\section{Reductions}
Reductions allow us to formally compare the relative hardness of
different computational problems. There are two main types of reductions
in Complexity Theory:
\BI
\item {\bf Turing reductions.} The most general type of reduction
between problems (which also works
for relations) is of the following type (this is called a many-many reduction,
or Turing reduction):
\BI
\item $R_1 \rightarrow R_2$ (i.e., $R_1$ reduces to $R_2$):
Given a subroutine to solve $R_2$, give an algorithm to solve $R_1$.
\EI
The problem with this kind of reduction is that it is hard
to separate complexity classes using it. For instance, SAT reduces
to co-SAT using this reduction. (In fact, co-SAT can be solved using
a SAT-oracle by simply forwarding the instance to the oracle and negating
its output.)
\item {\bf Karp reductions.} Reductions between languages can be defined
in the following
(more stringent) way (this is called a many-one reduction or Karp reduction):
\BI
\item A reduction from $L_1$ to $L_2$ ($L_1 \rightarrow L_2$) is
an (efficient) algorithm $T : \{0,1\}^* \rightarrow \{0,1\}^*$ such that
$$x\in L_1 \Leftrightarrow T(x) \in L_2.$$
\EI
\EI
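The SAT/co-SAT observation in the first item above can be made concrete. In this sketch the oracle is replaced by a brute-force stand-in (names and encodings are illustrative); the reduction itself is just one oracle query followed by a negation.

```python
from itertools import product

def satisfiable(cnf, n):
    """Brute-force stand-in for the SAT oracle: cnf is a list of clauses,
    each a list of signed variable indices over variables 1..n."""
    def satisfied_by(bits):
        return all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
                   for clause in cnf)
    return any(satisfied_by(bits)
               for bits in product([False, True], repeat=n))

def co_sat(cnf, n):
    """Turing reduction: decide unsatisfiability (co-SAT) with a single
    query to the SAT oracle, then negate the answer."""
    return not satisfiable(cnf, n)
```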
We will use the latter kind of reduction (Karp reductions) in the remainder of this course.
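For contrast, here is a classic example of a Karp reduction (not from the lecture, added for illustration): CLIQUE many-one reduces to INDEPENDENT-SET by complementing the graph. The brute-force checkers exist only to verify the ``$\Leftrightarrow$'' condition on small instances.

```python
from itertools import combinations

def complement(n, edges):
    """Edge set of the complement of an n-vertex graph (edges are frozensets)."""
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - set(edges)

def reduce_clique_to_is(instance):
    """Karp reduction T:  <G, k> in CLIQUE  <=>  T(<G, k>) in IND-SET,
    where T complements the graph and leaves k unchanged."""
    n, edges, k = instance
    return (n, complement(n, edges), k)

# Brute-force checkers, used only to verify the "<=>" condition.
def has_clique(n, edges, k):
    return any(all(frozenset(p) in edges for p in combinations(s, 2))
               for s in combinations(range(n), k))

def has_ind_set(n, edges, k):
    return any(all(frozenset(p) not in edges for p in combinations(s, 2))
               for s in combinations(range(n), k))
```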
\section{Classical vs. Modern Resources in Complexity Theory}
The classical resources investigated in Complexity theory
are Time, Space and Non-deterministic Time/Space.
In this course we will instead focus on the following
modern resources:
\BI
\item {\bf Randomness} (and getting rid of randomness from randomized algorithms).
Randomness has played a crucial role in modern Complexity Theory.
In recent years the focus has mostly been on removing randomness
from algorithms or reductions. However, although the ``derandomized''
algorithms no longer make use of randomness, they remain
very similar to the randomized ones.
\item {\bf Non-uniformity}
\item {\bf Alternation.} Using alternation we can show that the answer to at least
one of the following questions is NO:
\BE
\item SAT $\in$ Lin-Time?
\item SAT $\in$ Logspace?
\EE
(Note that we do not know whether the answer to either of the above questions
individually is no.)
\item {\bf Proofs, Interaction} We will, for instance, show that ``IP = PSPACE'', and present \emph{Probabilistically Checkable Proofs}.
\item {\bf Quantum Computation}
\EI
\section{A 10 Minute Review of 6.840}
\paragraph{When you are given more of one resource you can do more.} For example,
\BI
\item TIME$(n^2) \subset$ TIME$(n^3)$. This is proven using diagonalization
techniques. On a high level, one constructs a language that is in TIME($n^3$)
but differs from every language in TIME($n^2$).
This is done by simulating
all the machines in TIME($n^2$) and flipping the output bit on some inputs.
\item NTIME($n^2$) $\subset$ NTIME($n^3$). This was proved by Cook '74 using more
complicated techniques.
\EI
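The diagonalization in the first item can be sketched as follows (a sketch added for concreteness; the exponent $2.5$ is just a convenient choice strictly between $2$ and $3$, and the overhead of universal simulation is ignored). Consider the language
\[
D = \bigl\{\, \langle M \rangle \;:\; M \mbox{ does not accept } \langle M \rangle \mbox{ within } |\langle M \rangle|^{2.5} \mbox{ steps} \,\bigr\}.
\]
Simulating $M$ for $|\langle M \rangle|^{2.5}$ steps shows $D \in$ TIME($n^3$). On the other hand, no TIME($n^2$) machine $M$ decides $D$: on a (sufficiently long, padded) input $\langle M \rangle$, the machine $M$ halts within $|\langle M \rangle|^{2.5}$ steps, and $D$ is defined to disagree with $M$'s answer there.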
\paragraph{It seems hard to compare different resources.}
\BI
\item \emph{Time/Space}: we have only a very ``loose'' bound on the
relation between space and time, namely
\BI
\item TIME$(t) \subseteq$ SPACE$(t) \subseteq $TIME$(2^t)$.
\EI
\item \emph{Time/NTime}: The situation is analogous between deterministic
and non-deterministic time, namely
\BI
\item TIME$(t) \subseteq $NTIME$(t) \subseteq$ TIME$(2^t)$.
\EI
The theory of NP-completeness also investigates the relation between
deterministic and non-deterministic time; the main open question is whether
\NP = \P.
\item \emph{Space/NSpace}: For the case of space vs.\ non-deterministic space,
however, the known bounds are much tighter:
\BI
\item SPACE$(s) \subseteq$ NSPACE$(s) \subseteq$ SPACE$(s^2)$ (Savitch's theorem)
\item In particular, LOGSPACE $\subseteq$ NLOGSPACE $\subseteq$ SPACE$(\log^2 n)$
\item NSPACE$(s) =$ co-NSPACE$(s)$ for $s(n) \geq \log n$ (Immerman--Szelepcsényi)
\EI
\EI
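Savitch's theorem rests on a midpoint recursion for reachability (applied, in the proof, to the configuration graph of the non-deterministic machine). The toy sketch below illustrates the recursion only; the names and explicit edge set are illustrative, and this Python function is of course not itself a space-bounded machine.

```python
def reachable(vertices, edges, u, v, steps):
    """Savitch's midpoint recursion: is there a walk u -> v of length
    <= steps?  The recursion depth is O(log steps), and each stack
    frame remembers only O(1) vertex names -- the source of the
    SPACE(s^2) bound when run on a configuration graph of size 2^O(s)."""
    if steps <= 1:
        return u == v or (u, v) in edges
    half = (steps + 1) // 2
    # "Guess" a midpoint w and recurse on the two halves of the walk.
    return any(reachable(vertices, edges, u, w, half)
               and reachable(vertices, edges, w, v, steps - half)
               for w in vertices)
```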
\end{document}