The primary focus of our lab is the science of modern machine learning. We aim to combine theoretical and empirical insights to build a principled, thorough understanding of key machine learning techniques, such as deep learning, and of the challenges that arise in this context. A major theme in our investigations is rethinking machine learning from the perspective of security and robustness.
- We launched our blog. Check it out!
- We are looking for motivated MIT undergraduate students to help us build our infrastructure for deep learning experimentation. Ping us if you are interested!
- Check out our adversarial robustness challenges for MNIST and CIFAR-10. Can you break our networks? (We couldn't.)
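The robustness challenges above ask for adversarial examples: inputs perturbed within a small L-infinity ball that flip a model's prediction. As an illustration only, here is a minimal projected gradient descent (PGD) attack sketch on a toy linear classifier; the model, data, and epsilon are placeholders, not the actual challenge setup.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, step=0.05, iters=40):
    """Maximize the logistic loss of a linear model sign(w.x + b)
    within an L-infinity ball of radius eps around x (toy example)."""
    x_adv = x.copy()
    for _ in range(iters):
        margin = y * (w @ x_adv + b)
        # Gradient of the logistic loss log(1 + exp(-margin)) w.r.t. x_adv.
        grad = -y * w / (1.0 + np.exp(margin))
        x_adv = x_adv + step * np.sign(grad)      # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in valid pixel range
    return x_adv

# Hypothetical classifier and input: x is correctly classified as y = +1.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.6, 0.2])
y = 1.0
x_adv = pgd_attack(x, y, w, b)
print(np.sign(w @ x_adv + b))  # the perturbed input is now misclassified
```

Attacks on real networks follow the same loop, with the gradient taken through the network's loss instead of the closed-form expression above.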
Faculty: Aleksander Mądry
Graduate Students: Andrew Ilyas, Guillaume Leclerc, Aleksandar Makelov, Shibani Santurkar, Brandon Tran, Dimitris Tsipras, Kai Xiao
Undergraduate Students: Logan Engstrom, Samarth Gupta, Calvin Lee, Cynthia Liu, Tarek Mansour, Vivek Miglani, Nur Muhammad Shafiullah, Michael Sun, Loc Trinh, Alexander Turner, Abhinav Venigalla, Tony Wang, Andy Wei, Wendy Wei, Brandon Zeng, Jeffrey Zhang
Affiliated Researchers: Jerry Li, Ludwig Schmidt
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability,
Kai Xiao, Vincent Tjeng, Nur Muhammad Shafiullah, Aleksander Mądry.
Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors,
Andrew Ilyas, Logan Engstrom, Aleksander Mądry.
There Is No Free Lunch In Adversarial Robustness (But There Are Unexpected Benefits),
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Mądry.
How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift),
Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Mądry.
NIPS 2018 (oral presentation).
Adversarially Robust Generalization Requires More Data,
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Mądry.
NIPS 2018 (spotlight presentation).
A Classification-Based Study of Covariate Shift in GAN Distributions,
Shibani Santurkar, Ludwig Schmidt, Aleksander Mądry.
ICML 2018.
Spotlight presentation at the Deep Learning: Bridging Theory and Practice workshop at NIPS 2017.
On the Limitations of First-Order Approximation in GAN Dynamics,
Jerry Li, Aleksander Mądry, John Peebles, Ludwig Schmidt (alphabetical order).
ICML 2018.
Poster presentation at the Principled Approaches to Deep Learning workshop at ICML 2017.
A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations,
Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, Aleksander Mądry.
Poster presentation at the Machine Learning and Computer Security workshop at NIPS 2017.
Towards Deep Learning Models Resistant to Adversarial Attacks,
Aleksander Mądry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu (alphabetical order).
ICLR 2018.
Oral presentation at the Principled Approaches to Deep Learning workshop at ICML 2017.