Professional service: PC member for ITCS'13, FOCS'14, CCC'15.
I study theoretical computer science. I have recently joined the CS Department of the University of Chicago as an Assistant Professor. (My U of C website is here. For the time being I'm maintaining both sites.) I encourage talented students to apply and/or get in touch with me about opportunities for graduate study.
Background: In 2012 I completed a PhD in the EECS Dept. at MIT. I'm very lucky to have had Scott Aaronson as my advisor. After that I spent two years as a postdoc with Avi Wigderson's group at IAS (Princeton, NJ), followed by one year as a postdoc working with Rahul Santhanam at the University of Edinburgh. Good times!
My research: I have broad interests, with a focus on computational complexity---the study of the inherent limits of efficient computation.
A main goal of my current work is to better understand the limits of some powerful algorithmic paradigms for solving NP-hard problems. One such paradigm is kernelization. A kernelization reduction is an algorithm that tries to "shrink" problem instances, a preprocessing step that makes exhaustive search more feasible. A second algorithmic paradigm I've studied is intelligent random guessing, in which one tries to guess a solution to the given problem in a clever way, so that one's success probability is higher than with the naïve approach (one may then run many independent trials to find a solution with high probability, or conclude that none exists). I've shown new limits on the kinds of algorithms we can hope to develop by either of these approaches. This work has (I feel) involved an intriguing mix of computational, probabilistic, and information-theoretic ideas.
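To make the two paradigms concrete, here is a small Python sketch (not taken from my own work) of two textbook illustrations: Buss's kernelization for k-Vertex-Cover, which shrinks any instance to at most k² edges before exhaustive search, and the standard bound on how many independent trials of a randomized guess suffice for high-probability success. All function names are my own for this sketch.

```python
import math
from collections import defaultdict

def buss_kernel(edges, k):
    """Buss's kernel for k-Vertex-Cover.

    Returns (kernel_edges, k_remaining, forced_vertices), where
    forced_vertices must belong to every cover of size <= k, or
    None if no cover of size <= k exists.  The kernel has at most
    k_remaining**2 edges, so exhaustive search on it is feasible.
    """
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        deg = defaultdict(int)
        for e in edges:
            for v in e:
                deg[v] += 1
        high = [v for v, d in deg.items() if d > k]
        if not high:
            break
        v = high[0]
        # A vertex of degree > k is in every cover of size <= k;
        # otherwise all of its > k neighbors would have to be.
        forced.add(v)
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return None
    if len(edges) > k * k:
        # k vertices of degree <= k cover at most k^2 edges.
        return None
    return edges, k, forced

def trials_for_success(p, delta):
    """If each clever guess succeeds with probability p, how many
    independent trials make the failure probability at most delta?
    Since (1 - p)**t <= exp(-p * t), taking t >= ln(1/delta) / p
    suffices."""
    return math.ceil(math.log(1 / delta) / p)
```

For example, on a star graph with center `c` and parameter k = 1, the kernel immediately forces `c` into the cover and leaves an empty instance, while on a triangle with k = 1 it correctly reports that no size-1 cover exists. And a guess that succeeds with probability 1/2 needs only about ln(1/δ)/0.5 repetitions to succeed with probability 1 - δ.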
Another major strand in complexity theory is the study of non-standard proof systems, which incorporate features like interaction with provers, probabilistic verification, and the manipulation of quantum states. These brain-bending proofs are surprisingly powerful, and I've helped to further illuminate their power.
Yet another theme that fascinates me is joint computation---the common situation in which we have multiple computational tasks to perform simultaneously. When can we cleverly combine computations to make them more efficient and reliable? Much of my work aims at better understanding the possibility and also the limits of such "computational synergies."
Outside of complexity theory, I've worked on problems in prediction and polynomial approximation. I also have a personal interest in using algorithmic ideas to better understand the power of human memory.