G630, 32 Vassar Street
Stata Center, MIT
Cambridge, MA 02139
shibani AT mit DOT edu

My CV (last updated 09/02/2020)
I am a PhD student in Computer Science at MIT, where I am fortunate to be co-advised by Aleksander Madry and Nir Shavit. My goal is to develop machine learning tools that are robust, reliable, and ready for real-world deployment. Specifically, my research focuses on two broad themes: developing a precise understanding of how widely-used deep learning techniques work, and finding avenues to make machine learning methods robust and secure from an adversarial viewpoint.
Before coming to MIT, I graduated from the Indian Institute of Technology Bombay in 2015 with a Dual Degree (Bachelor's and Master's) in Electrical Engineering. For my Master's thesis, I worked with Bipin Rajendran on artificial neural networks.
In Summer '19, I attended the Foundations of Deep Learning Program at the Simons Institute. In Summer '18, I was at Google Brain, working with Ilya Mironov on differentially private generative models. In Summer '17, I was an intern at Vicarious, working with Huayan Wang.
I co-organized a workshop on Trustworthy ML at ICLR 2020 with Nicolas Papernot, Florian Tramèr, Carmela Troncoso, and Nicholas Carlini.
We recently released our codebase for training and experimenting with (robust) models.
I am honored to be a recipient of the Google PhD Fellowship in Machine Learning (2019).
From ImageNet to Image Classification: Contextualizing Progress on Benchmarks
Dimitris Tsipras*, Shibani Santurkar*, Logan Engstrom, Andrew Ilyas, Aleksander Madry
Identifying Statistical Bias in Dataset Replication
Logan Engstrom*, Andrew Ilyas*, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry
Implementation Matters in Deep RL: A Case Study on PPO and TRPO
Logan Engstrom*, Andrew Ilyas*, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry
ICLR 2020 (Oral Presentation)
[Blog posts: part 1 and part 2]
Learning Perceptually-Aligned Representations via Adversarial Robustness
Logan Engstrom*, Andrew Ilyas*, Shibani Santurkar*, Dimitris Tsipras*, Brandon Tran*, Aleksander Madry
[Blog post], [Code]
Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas*, Shibani Santurkar*, Dimitris Tsipras*, Logan Engstrom*, Brandon Tran, Aleksander Madry
NeurIPS 2019 (Spotlight Presentation)
[Blog post], [Datasets]
Robustness May Be at Odds with Accuracy
Dimitris Tsipras*, Shibani Santurkar*, Logan Engstrom*, Alexander Turner, Aleksander Madry
Adversarially Robust Generalization Requires More Data
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry
NeurIPS 2018 (Spotlight Presentation)
A Classification-Based Study of Covariate Shift in GAN Distributions
Shibani Santurkar, Ludwig Schmidt, Aleksander Madry
Deep Tensor Convolution on Multicores
David Budden, Alex Matveev, Shibani Santurkar, Shraman Chaudhari, Nir Shavit
* Equal Contribution