David Alvarez-Melis

PhD Candidate, MIT Computer Science and Artificial Intelligence Lab

Stata Center, Bldg 32-G496, Cambridge, MA 02139

d_alv_mel_[at]_mit_[dot]_edu (humans: remove underscores)

About

***I'm currently on the job market, looking for research positions starting on or around August 2019.***

I'm a PhD Candidate in the EECS department at MIT, working on Machine Learning and Natural Language Processing under the supervision of Tommi Jaakkola. My research revolves around learning in structured domains: from learning representations of structured objects, to generating them, to interpreting models that operate on them, to computing distances between them. The latter two aspects, interpretability and distances between structured objects, form the core themes of my PhD work.

Other topics I have worked on in the past appear in the Projects and Publications sections below.

Bio

I started my academic path at ITAM (Mexico City), where I obtained a Licenciatura (BSc) in Applied Mathematics with a thesis on functional analysis under the supervision of Carlos Bosch. I then earned an MS in Mathematics from the Courant Institute (NYU), where I worked on semidefinite programming for domain adaptation with Mehryar Mohri. Before joining MIT, I spent a year at IBM's T.J. Watson Research Center, working with Ken Church and others in the Speech Recognition Group.

During my PhD, I've interned twice at Microsoft Research (once at the Redmond Lab, once at the New York Lab), where I've been fortunate to collaborate with a stellar group of mentors: Scott Yih, Ming-Wei Chang, Kristina Toutanova, Chris Meek, Hanna Wallach, Jenn Wortman Vaughan, and Hal Daumé III.

News

Projects

Word Translation with Optimal Transport
OT-based approaches to fully unsupervised bilingual lexical induction (see the sketch below).
Optimal Transport with Local and Global Structure
Generalizing the OT problem to include local structure (or ignore global invariances).
Robustly Interpretable Machine Learning
Bridging the gap between model expressiveness and transparency.
Towards a Theory of Word Embeddings
A theoretical framework to understand the semantic properties of word embeddings.
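As rough background for the two optimal transport projects above, here is a sketch of the objectives involved, using standard textbook definitions rather than the exact formulations in the papers. Classical (Kantorovich) OT assumes a ground cost C between points of the two spaces, while the Gromov-Wasserstein problem compares only within-space distances D and D', which is what makes fully unsupervised alignment of word embedding spaces possible:

\mathrm{OT}(p, q) = \min_{\Gamma \in \Pi(p, q)} \sum_{i,j} C_{ij}\, \Gamma_{ij},
\qquad
\mathrm{GW}(p, q) = \min_{\Gamma \in \Pi(p, q)} \sum_{i,j,k,l} \bigl( D_{ik} - D'_{jl} \bigr)^2 \, \Gamma_{ij}\, \Gamma_{kl},

where \Pi(p, q) = \{\, \Gamma \ge 0 : \Gamma \mathbf{1} = p,\ \Gamma^{\top} \mathbf{1} = q \,\} is the set of couplings and p, q are (for example, uniform) weights over the points of each space.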

Publications

Most recent publications on Google Scholar.

Towards Robust Interpretability with Self-Explaining Neural Networks

David Alvarez-Melis, Tommi S. Jaakkola

NIPS'18: Neural Information Processing Systems. 2018.

Gromov-Wasserstein Alignment of Word Embedding Spaces

David Alvarez-Melis, Tommi S. Jaakkola

EMNLP'18: Empirical Methods in Natural Language Processing. 2018. Oral Presentation.

Paper arXiv Slides Poster Press Bib

Game-theoretic Interpretability for Temporal Modeling

Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola

Workshop on Fairness, Accountability, and Transparency in Machine Learning (@ICML 2018).

On the Robustness of Interpretability Methods

David Alvarez-Melis, Tommi S. Jaakkola

Workshop on Human Interpretability in Machine Learning (@ICML 2018).

Structured Optimal Transport

David Alvarez-Melis, Tommi S. Jaakkola, Stefanie Jegelka

AISTATS'18: Artificial Intelligence and Statistics. 2018. Oral Presentation.

Earlier version presented at the NIPS Workshop on Optimal Transport for Machine Learning, 2017, as an Extended Oral.

Paper Poster Bib

The Emotional GAN: Priming Adversarial Generation of Art with Emotion

David Alvarez-Melis, Judith Amores

NIPS Workshop on Machine Learning for Creativity and Design. 2018.

Distributional Adversarial Networks

Chengtao Li*, David Alvarez-Melis*, Keyulu Xu, Stefanie Jegelka, Suvrit Sra

Preprint. 2017.

A Causal Framework for Explaining the Predictions of Black-Box Sequence-to-Sequence Models

David Alvarez-Melis, Tommi S. Jaakkola

EMNLP'17: Empirical Methods in Natural Language Processing. 2017.

Paper Suppl. arXiv Press Bib

Tree-structured Decoding with Doubly-recurrent Neural Networks

David Alvarez-Melis, Tommi S. Jaakkola

ICLR'17: International Conference on Learning Representations. 2017.

Paper arXiv Poster Code Bib

Word Embeddings as Metric Recovery in Semantic Spaces

Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola

TACL: Transactions of the Association for Computational Linguistics. 2016. (presented at ACL'16).

Paper arXiv Bib

Topic Modeling in Twitter: Aggregating Tweets by Conversations

David Alvarez-Melis*, Martin Saveski*

ICWSM'16: International AAAI Conference on Web and Social Media. 2016. (Short Paper)

Paper Poster Bib

Word, graph and manifold embedding from Markov processes

Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola

NIPS 2015 Workshop on Nonparametric Methods for Large Scale Representation Learning. Oral presentation.

A translation of 'The characteristic function of a random phenomenon' by Bruno de Finetti

David Alvarez-Melis, Tamara Broderick

Translation. 2015.

The Matrix Multiplicative Weights Algorithm for Domain Adaptation

David Alvarez-Melis (advisor: Mehryar Mohri)

MS Thesis, Courant Institute. 2013.

Lax-Milgram's Theorem: Generalizations and Applications

David Alvarez-Melis (advisor: Carlos Bosch Giral)

BSc Thesis, ITAM. 2011.

Teaching

Explaining is understanding.

The following extract is from David Goodstein's book on Feynman:

"Feynman was a truly great teacher. He prided himself on being able to devise ways to explain even the most profound ideas to beginning students. Once, I said to him, "Dick, explain to me, so that I can understand it, why spin one-half particles obey Fermi-Dirac statistics." Sizing up his audience perfectly, Feynman said, "I'll prepare a freshman lecture on it." But he came back a few days later to say, "I couldn't do it. I couldn't reduce it to the freshman level. That means we don't really understand it."

Some current and past courses I have TA'd:

Vitæ

Full CV in PDF (or a shorter Résumé).

  • Microsoft Research, NYC Summer 2018
    Research Intern
    Mentors: H. Wallach, J.W. Vaughan, H. Daumé III
  • Microsoft Research, Redmond Summer 2016
    Research Intern
    Mentors: S. Yih, M.W. Chang, K. Toutanova, C. Meek
  • MIT CSAIL 2014 - now
    Ph.D. Student
    Machine Learning and NLP groups
  • IBM Research June 2013 - June 2014
    Supplemental Researcher
    Speech Recognition Group
  • Courant Institute, NYU Sep 2011 - May 2013
    MS Student
    Major: Mathematics
  • ITAM Jan 2006 - Feb 2011
    BSc Student
    Major: Applied Mathematics

Misc

Outside of research, I enjoy running, brewing beer, and playing guitar. As if writing papers for a living weren't enough, I sporadically write non-academic stuff too (mostly short stories and poems in Spanish). I also like quotes. Here are a few more:

"We cannot solve our problems with the same thinking we used when we created them." - A. Einstein
"The real danger is not that computers will begin to think like men, but that men will begin to think like computers" - Syndey J. Harris

Meta

This website was built with Jekyll, based on a template by my [friend|co-author|ex-roommate] and all-around awesome person, Martin Saveski.