-
The Orthogonal Vectors Conjecture and Non-Uniform Circuit Lower Bounds.
R. Williams.
[ECCC]
In 65th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2024). Invited to the special issue for FOCS 2024.
Summary: The Orthogonal Vectors Conjecture is one of the primary hardness hypotheses in fine-grained complexity. I show that the Orthogonal Vectors Conjecture implies non-uniform circuit lower bounds; this is surprising, because the conjecture concerns the hardness of uniform algorithms, and it is typically much more difficult to prove non-uniform lower bounds than uniform ones. For example, the Orthogonal Vectors Conjecture implies that 2-CNF formulas on $n$ variables do not have non-uniform depth-two exact threshold circuits of size $2^{o(n)}$. This new connection yields new forms of "win-win" circuit lower bounds. I also suggest a new way to attack the Orthogonal Vectors Conjecture (and possibly refute it) using practical SAT/SMT solvers: that is, we could potentially use practical SAT/SMT solvers to improve the worst-case complexity of SAT!
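For readers unfamiliar with the conjecture, here is a minimal sketch (my illustration, not code from the paper) of the underlying Orthogonal Vectors problem and its trivial quadratic algorithm; the conjecture asserts that this quadratic running time is essentially unavoidable.
```python
# Hypothetical illustration, not from the paper: the Orthogonal Vectors
# problem. Given sets A, B of n Boolean vectors in d dimensions (think
# d = polylog(n)), decide whether some a in A and b in B are orthogonal,
# i.e., <a, b> = 0.

def has_orthogonal_pair(A, B):
    # Brute force over all pairs: O(n^2 * d) time. The conjecture asserts
    # that, for d = omega(log n), no algorithm runs in n^{2 - eps} * poly(d)
    # time for any eps > 0.
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a in A for b in B)

# (1,0,1) and (0,1,0) share no common 1-coordinate, so they are orthogonal.
assert has_orthogonal_pair([(1, 0, 1), (1, 1, 0)], [(0, 1, 0)])
```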
-
Beating Brute Force for Compression Problems.
S. Hirahara, R. Ilango, and R. R. Williams.
[ECCC]
[Quanta magazine]
In 56th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2024).
Summary: Given any string, we can determine whether there is a program of at most $\ell$ bits that prints the string within $t$ steps, using substantially more efficient circuits than the obvious brute-force method. In particular, brute force gives circuits of size about $2^{\ell}\cdot t$, and we show there are circuits of size about $2^{4\ell/5} \cdot t$. Our tools are hashing tricks and circuit tricks: one component shows that the Fiat-Naor data structure for function inversion (designed for random-access models of computation) can be implemented non-trivially with standard Boolean circuits. Note that Mazor and Pass proved similar results in ITCS 2024.
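To fix ideas, here is a minimal sketch of the $2^{\ell} \cdot t$ brute-force baseline that the paper improves upon; the `simulate` interpreter is a hypothetical stand-in, not part of our construction.
```python
from itertools import product

def compressible(target, ell, t, simulate):
    # Brute-force baseline for the compression problem: try every program
    # of at most `ell` bits, running each for at most `t` steps. Here
    # `simulate` stands in for a universal interpreter (hypothetical; not
    # an API from the paper). Unrolled into hardware, this baseline costs
    # roughly 2^ell * t circuit size; the paper achieves roughly
    # 2^{4*ell/5} * t instead.
    for length in range(ell + 1):
        for bits in product((0, 1), repeat=length):
            if simulate(bits, steps=t) == target:
                return True
    return False
```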
-
Self-Improvement for Circuit Analysis Problems.
R. Williams.
[ECCC]
[pdf]
In 56th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2024).
Summary: This paper describes a phenomenon that I call "self-improvement": given a circuit-analysis algorithm (for SAT, $\#$SAT, or QSAT) that is "somewhat efficient" (say, running in $1000^k + n^{1+o(1)}$ time on circuits of size $n$ with $k$ inputs), one can apply the algorithm to itself in a non-trivial way to obtain much more efficient algorithms (running in $(1.0001)^k + n^{1+o(1)}$ time, for example, which would contradict the Exponential Time Hypothesis). This phenomenon has interesting implications for both circuit complexity and fine-grained complexity. For example, I use it to show that the Circuit Evaluation problem requires depth-two symmetric circuits of size $n^{1.1}$ (in the uniform setting).
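To see how large the gap between the two hypothetical running times is, one can rewrite both with base-2 exponents (a routine calculation, not taken from the paper):
$$1000^k = 2^{(\log_2 1000)\,k} \approx 2^{9.97\,k}, \qquad (1.0001)^k = 2^{(\log_2 1.0001)\,k} \approx 2^{0.000144\,k},$$
so self-improvement turns an exponent of nearly $10$ per input into one below $0.0002$ per input.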
-
Towards Stronger Depth Lower Bounds.
G. Bathie and R. R. Williams.
[LIPIcs]
In 15th Annual Innovations in Theoretical Computer Science Conference (ITCS 2024).
Summary: The best known depth lower bounds for an explicit function in $NP$ have the form $(3-o(1))\log_2 n$; similarly, the best known formula size lower bound is only $n^{3-o(1)}$. We make progress on the problem of improving this factor of 3, in two different ways. First, we show that slightly faster $\#$SAT algorithms for formulas of size only $n^{2+2\varepsilon}$ would imply new lower bounds against formulas of size $n^{3+\varepsilon}$, for any $\varepsilon > 0$. This is interesting in that the circuit-analysis algorithm only has to work on "small" formulas in order to yield a lower bound against "large" formulas. Second, we prove that the SAT problem requires uniform NAND circuits of depth at least $4 \cos(\pi/7) \log_2 n \geq 3.603 \log_2 n$. (Recall that the best known time lower bound for solving SAT with $n^{o(1)}$-space algorithms is $n^{2\cos(\pi/7)} \geq n^{1.801}$, proved in CCC'07.)
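A quick numerical check of the two constants, both of which come from the same $\cos(\pi/7)$ quantity (my verification, not part of the paper):
```python
import math

c = math.cos(math.pi / 7)   # ~0.90097
print(4 * c)                # depth constant: ~3.6039 >= 3.603
print(2 * c)                # SAT time-space exponent: ~1.8019 >= 1.801
```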
-
A VLSI Circuit Model Accounting for Wire Delay.
C. Jin, R. R. Williams, and N. Young.
[LIPIcs]
In 15th Annual Innovations in Theoretical Computer Science Conference (ITCS 2024).
Summary: Engineers are increasingly turning to highly parallel custom VLSI chips to implement expensive computations. Motivated by this development, we revisit the theory of VLSI. In VLSI design, the gates and wires of a logic circuit are placed on a two-dimensional chip with a small number of layers. Traditional VLSI models measure the time complexity of a chip by gate delay, ignoring the lengths of wires. However, as technology has advanced, wire delay has also become an important measure. We define and study the wire-delay VLSI model for this purpose, proving nearly tight upper and lower bounds on the time delay of chips solving several basic problems.