Constantinos Daskalakis

    Professor, EECS, MIT


I am a Professor at MIT's Electrical Engineering and Computer Science department, a member of CSAIL, and affiliated with LIDS and ORC. I am also an investigator in the Foundations of Data Science Institute (FODSI).

[Short Bio] [Honors and Awards]


Research Interests: Theory of Computation and its interface with Economics, Game Theory, Machine Learning, Statistics, and Probability Theory

teaching
my students
service & outreach


Complete Publications

Recent Publications/Preprints

Research Highlights

Links to Selected Talks, Videos, Slides

some general audience articles about my work

Selected Publications

Equilibrium Complexity
  • Constantinos Daskalakis, Paul W. Goldberg and Christos H. Papadimitriou: The Complexity of Computing a Nash Equilibrium.
    In the 38th ACM Symposium on Theory of Computing, STOC 2006.
    Journal version as SIAM Journal on Computing, 39(1), 195--259, May 2009. (Invited, special issue for STOC 2006.) pdf
    Expository article in Communications of the ACM 52(2):89--97, 2009. (Invited.) pdf
  • Constantinos Daskalakis: On the Complexity of Approximating a Nash Equilibrium.
    In the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011.
    ACM Transactions on Algorithms (TALG), 9(3): 23, 2013. Special Issue for SODA 2011. Invited. pdf
  • Constantinos Daskalakis and Christos Papadimitriou: Approximate Nash Equilibria in Anonymous Games.
    Journal of Economic Theory, 156:207--245, 2015. pdf
    Journal version of papers in FOCS 2007, FOCS 2008, and STOC 2009.
  • Constantinos Daskalakis, Stratis Skoulakis, Manolis Zampetakis: The Complexity of Constrained Min-Max Optimization.
    In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary
  • Constantinos Daskalakis, Noah Golowich, Kaiqing Zhang: The Complexity of Markov Equilibrium in Stochastic Games.
    arXiv.

Equilibrium Computation and Machine Learning
  • Constantinos Daskalakis, Noah Golowich, Kaiqing Zhang: The Complexity of Markov Equilibrium in Stochastic Games.
    arXiv.
  • Constantinos Daskalakis, Maxwell Fishelson, Noah Golowich: Near-Optimal No-Regret Learning in General Games.
    In the 35th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2021. Oral. arXiv. twitter summary
  • Constantinos Daskalakis, Stratis Skoulakis, Manolis Zampetakis: The Complexity of Constrained Min-Max Optimization.
    In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary
  • Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, Haoyang Zeng: Training GANs with Optimism.
    In the 6th International Conference on Learning Representations, ICLR 2018. arXiv.
  • Noah Golowich, Sarath Pattathil, Constantinos Daskalakis: Tight last-iterate convergence rates for no-regret learning in multi-player games.
    In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv

Game Theory
  • Constantinos Daskalakis and Qinxuan Pan: A Counter-Example to Karlin's Strong Conjecture for Fictitious Play.
    In the 55th IEEE Symposium on Foundations of Computer Science, FOCS 2014. arXiv
  • Yang Cai, Ozan Candogan, Constantinos Daskalakis and Christos Papadimitriou: Zero-sum Polymatrix Games: A Generalization of Minmax.
    Mathematics of Operations Research, 41(2): 648-655, 2016. pdf

Truncated Statistics ∩ Machine Learning ∩ Missing Data
  • Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis: Efficient Statistics, in High Dimensions, from Truncated Samples.
    In the 59th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2018. arXiv.
  • Constantinos Daskalakis, Themis Gouleakis, Christos Tzamos, Manolis Zampetakis: Computationally and Statistically Efficient Truncated Regression.
    In the 32nd Annual Conference on Learning Theory, COLT 2019. arXiv.
  • Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis: Truncated Linear Regression in High Dimensions.
    In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. arXiv

Statistical Physics ∩ Machine Learning ∩ Dependent Data
  • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Siddhartha Jayanti: Generalization and learning under Dobrushin's condition.
    In the 32nd Annual Conference on Learning Theory, COLT 2019. arXiv.
  • Constantinos Daskalakis, Nishanth Dikkala, Ioannis Panageas: Regression from Dependent Observations.
    In the 51st Annual ACM Symposium on the Theory of Computing, STOC 2019. arXiv
  • Yuval Dagan, Constantinos Daskalakis, Nishanth Dikkala, Anthimos Vardis Kandiros: Learning Ising Models from One or Multiple Samples.
    In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv.

Phylogenetics
  • Constantinos Daskalakis, Elchanan Mossel and Sebastien Roch: Optimal Phylogenetic Reconstruction.
    In the 38th ACM Symposium on Theory of Computing, STOC 2006. arXiv
    Journal version as Probability Theory and Related Fields, 149(1-2):149-189, 2011.

Coding Theory
  • Sanjeev Arora, Constantinos Daskalakis and David Steurer: Message-Passing Algorithms and Improved LP Decoding.
    In the 41st ACM Symposium On Theory of Computing, STOC 2009.
    Journal version as IEEE Transactions on Information Theory, 58(12):7260--7271, 2012. pdf

Probability Theory
  • Constantinos Daskalakis and Christos Papadimitriou: Sparse Covers for Sums of Indicators.
    Probability Theory and Related Fields, 162(3):679-705, 2015. arXiv
  • Constantinos Daskalakis, Anindya De, Gautam Kamath and Christos Tzamos: A Size-Free CLT for Poisson Multinomials and its Applications.
    In the 48th ACM Symposium on Theory of Computing, STOC 2016. arXiv
  • Constantinos Daskalakis, Nishanth Dikkala and Gautam Kamath: Concentration of Multilinear Functions of the Ising Model with Applications to Network Data.
    In the 31st Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2017. arXiv

Mechanism Design
  • Yang Cai, Constantinos Daskalakis and Matt Weinberg: An Algorithmic Characterization of Multi-Dimensional Mechanisms.
    In the 44th ACM Symposium on Theory of Computing, STOC 2012. ECCC Report.
  • Yang Cai, Constantinos Daskalakis and Matt Weinberg: Understanding Incentives: Mechanism Design becomes Algorithm Design.
    In the 54th IEEE Symposium on Foundations of Computer Science, FOCS 2013. arXiv
  • Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Mechanism Design via Optimal Transport.
    In the 14th ACM Conference on Electronic Commerce, EC 2013. Best Paper and Best Student Paper Award. pdf
  • Constantinos Daskalakis, Alan Deckelbaum and Christos Tzamos: Strong Duality for a Multiple-Good Monopolist.
    In the 16th ACM Conference on Economics and Computation (EC), EC 2015. arXiv
    Journal version as Econometrica, 85(3):735-767, 2017.
  • Yang Cai, Constantinos Daskalakis and Christos H. Papadimitriou: Optimum Statistical Estimation with Strategic Data Sources.
    In the 28th Annual Conference on Learning Theory (COLT), COLT 2015. arXiv
  • Constantinos Daskalakis, Christos Papadimitriou and Christos Tzamos: Does Information Revelation Improve Revenue?
    In the 17th ACM Conference on Economics and Computation (EC), EC 2016. Invited to Special Issue. pdf
  • Constantinos Daskalakis and Vasilis Syrgkanis: Learning in Auctions: Regret is Hard, Envy is Easy.
    In the 57th IEEE Symposium on Foundations of Computer Science (FOCS), FOCS 2016. arXiv
  • Yang Cai and Constantinos Daskalakis: Learning Multi-Item Auctions with (or without) Samples.
    In the 58th IEEE Symposium on Foundations of Computer Science (FOCS), FOCS 2017. arXiv
  • Johannes Brustle, Yang Cai, Constantinos Daskalakis: Multi-Item Mechanisms without Item-Independence: Learnability via Robustness.
    In the 21st ACM Conference on Economics and Computation, EC 2020. arXiv

Property Testing ∩ Statistics
  • Jayadev Acharya, Constantinos Daskalakis and Gautam Kamath: Optimal Testing for Properties of Distributions.
    In the 29th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2015. Spotlight Paper. arXiv
  • Constantinos Daskalakis and Qinxuan Pan: Square Hellinger Subadditivity for Bayesian Networks and its Applications to Identity Testing.
    In the 30th Annual Conference on Learning Theory, COLT 2017. arXiv
  • Bryan Cai, Constantinos Daskalakis and Gautam Kamath: Priv'IT: Private and Sample Efficient Identity Testing.
    In the 34th International Conference on Machine Learning, ICML 2017. arXiv
  • Constantinos Daskalakis, Gautam Kamath and John Wright: Which Distribution Distances are Sublinearly Testable?
    In the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018. arXiv
  • Constantinos Daskalakis, Nishanth Dikkala and Gautam Kamath: Testing Ising Models.
    In the 29th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018. arXiv
    Journal version in IEEE Transactions on Information Theory, 65(11): 6829-6852, 2019. ieee

Other Machine Learning and Statistics
  • Constantinos Daskalakis, Ilias Diakonikolas and Rocco A. Servedio: Learning Poisson Binomial Distributions.
    In the 44th ACM Symposium on Theory of Computing, STOC 2012. arXiv
    Algorithmica, 72(1):316-357, 2015. Special Issue on New Theoretical Challenges in Machine Learning. Invited. arXiv
  • Constantinos Daskalakis, Christos Tzamos and Manolis Zampetakis: Ten Steps of EM Suffice for Mixtures of Two Gaussians.
    In the 30th Annual Conference on Learning Theory, COLT 2017. Preliminary version presented at NeurIPS 2016 Workshop on Non-Convex Optimization for Machine Learning. arXiv
  • Constantinos Daskalakis, Christos Tzamos and Manolis Zampetakis: A Converse to Banach's Fixed Point Theorem and its CLS Completeness.
    In the 50th Annual ACM Symposium on the Theory of Computing, STOC 2018. arXiv
  • Constantinos Daskalakis, Dhruv Rohatgi, Manolis Zampetakis: Constant-Expansion Suffices for Compressed Sensing with Generative Priors.
    In the 34th Annual Conference on Neural Information Processing Systems (NeurIPS), NeurIPS 2020. Spotlight. arXiv
  • Constantinos Daskalakis, Qinxuan Pan: Sample-Optimal and Efficient Learning of Tree Ising Models.
    In the 53rd ACM Symposium on Theory of Computing, STOC 2021. arXiv. twitter summary

some guiding principles: The Satrapy

What a misfortune, although you are made
for fine and great works
this unjust fate of yours always
denies you encouragement and success;
that base customs should block you;
and pettiness and indifference.
And how terrible the day when you yield
(the day when you give up and yield),
and you leave on foot for Susa,
and you go to the monarch Artaxerxes
who favorably places you in his court,
and offers you satrapies and the like.
And you accept them with despair
these things that you do not want.
Your soul seeks other things, weeps for other things;
the praise of the public and the Sophists,
the hard-won and inestimable Well Done;
the Agora, the Theater, and the Laurels.
How can Artaxerxes give you these,
where will you find these in a satrapy;
and what life can you live without these.

Constantine P. Cavafy (1910).

[Accessibility]