Swami Sankaranarayanan

swamiviv@mit.edu
Google Scholar / GitHub / Twitter

About me

I am on the job market -- looking for positions related to AI safety and trustworthy AI initiatives!

I am a Postdoctoral Associate at MIT working with Phillip Isola and Marzyeh Ghassemi.

Previously, I was a Research Scientist at Butterfly Network, where I was part of the deep learning team led by Nathan Silberman. I obtained my Ph.D. under Prof. Rama Chellappa, for which I was awarded the ECE Distinguished Dissertation Award.

I am interested in what makes a machine learning system trustworthy. I study this question from several perspectives:

  • Interpretable Auditing: Quantifying when ML systems fail and reasoning about why they fail.
  • Communicating ML decisions: Uncertainty estimation as a tool for effectively communicating ML predictions.
  • Value Alignment: Aligning machine learning model development with human values and expertise.

News



Talk: Invited talk at Johns Hopkins University.
Talk: Invited talk at the AI Ethics team, Sony Research.
Talk: Invited talk at Cornell University.
Workshop: Our workshop on Generative AI Challenges has been accepted at ICML'23!
Talk: Invited talk at University of Merced.
Talk: Invited talk at Stanford University.
Press coverage: My work on uncertainty estimation is covered by MIT News.
Talk: Research seminar at DeepMind, London, titled "Semantically Meaningful Uncertainty Estimates".
Talk: Research talk at Google Research (PAIR team) on our work on semantic uncertainty intervals.
NeurIPS 2022: 3 papers at NeurIPS 2022: 1 in the main conference and 2 at workshops!
Award: Received the Scholar Award to attend NeurIPS 2022!

Select Papers (Full list)


Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors
Thomas Hartvigsen, Swami Sankaranarayanan, Hamid Palangi, Yoon Kim, Marzyeh Ghassemi
[Under Review] [also @ NeurIPS 2022 Workshop on Robustness in Sequence Modeling (Spotlight talk)]



Semantic uncertainty intervals for disentangled latent spaces
Swami Sankaranarayanan, Anastasios Angelopoulos, Stephen Bates, Yaniv Romano, Phillip Isola
NeurIPS 2022.
[Paper] [Website] [Code] [Video]

Real world relevance of counterfactual generations
Swami Sankaranarayanan, Thomas Hartvigsen, Lauren Oakden-Rayner, Marzyeh Ghassemi, Phillip Isola
NeurIPS 2022 Workshop for Trustworthy and Responsible Machine Learning (TSRML).
[Paper]

Exploring Visual Prompts for Adapting Large-Scale Models
Hyojin Bahng, Ali Jahanian*, Swami Sankaranarayanan*, Phillip Isola
Technical Report, arXiv 2022.
[Paper] [Website] [Code]

Discrepancy Ratio: Evaluating Model Performance When Even Experts Disagree on the Truth
Igor Lovchinsky, Alon Daks, Israel Malkin, Pouya Samangouei, Ardavan Saeedi, Yang Liu, Swami Sankaranarayanan,
Tomer Gafner, Ben Sternlieb, Patrick Maher, Nathan Silberman
ICLR 2020.
[Paper]

Learning From Noisy Labels By Regularized Estimation Of Annotator Confusion
Ryutaro Tanno, Ardavan Saeedi, Swami Sankaranarayanan, Daniel Alexander, Nathan Silberman
CVPR 2019.
[Paper] [Code]

MetaReg: Towards Domain Generalization using Meta-Regularization
Yogesh Balaji, Swami Sankaranarayanan, Rama Chellappa
NeurIPS 2018.
[Paper] [Code]

Learning from Synthetic Data (LSD): Addressing Domain Shift for Semantic Segmentation
Swami Sankaranarayanan*, Yogesh Balaji*, Arpit Jain, Sernam Lim, Rama Chellappa
CVPR 2018. [Spotlight talk]
[Paper] [Code]

Generate To Adapt (GTA): Aligning Domains using Generative Adversarial Networks
Swami Sankaranarayanan*, Yogesh Balaji*, Carlos Castillo, Rama Chellappa
CVPR 2018. [Spotlight talk]
[Paper] [Code]

Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms
P. Jonathon Phillips, Amy N. Yates, Ying Hu, Carina A. Hahn, Eilidh Noyes, Kelsey Jackson, Jacqueline G. Cavazos, Géraldine Jeckeln, Rajeev Ranjan, Swami Sankaranarayanan, Jun-Cheng Chen, Carlos D. Castillo, Rama Chellappa, David White, Alice J. O’Toole
Proceedings of the National Academy of Sciences (PNAS) 2018.
[Paper]