I'm a PhD Candidate in the EECS department at MIT, working on Machine Learning and Natural Language Processing under the supervision of Tommi Jaakkola. My research revolves around learning in structured domains: from learning representations of structured objects, to generating them, to interpreting models that operate on them, to computing distances between them. The latter two form the core themes of my PhD work.
I started my academic path at ITAM (Mexico City), where I obtained a Licenciatura (BSc) in Applied Mathematics with a thesis on functional analysis under the supervision of Carlos Bosch. I then earned an MS in Mathematics from the Courant Institute (NYU), where I worked on semidefinite programming for domain adaptation with Mehryar Mohri. Before joining MIT, I spent a year at IBM's T.J. Watson Research Center, working with Ken Church and others in the Speech Recognition Group.
During my PhD, I've interned twice at Microsoft Research (once at the Redmond Lab, once at the New York Lab), where I've been fortunate to collaborate with a stellar group of mentors: Scott Yih, Ming-Wei Chang, Kristina Toutanova, Chris Meek, Hanna Wallach, Jenn Wortman Vaughan, and Hal Daumé III.
Most recent publications on Google Scholar.
Functional Transparency for Structured Data: a Game-Theoretic Approach
Guang-He Lee, Wengong Jin, David Alvarez-Melis, Tommi S. Jaakkola
ICML'19: International Conference on Machine Learning. 2019.
Learning Generative Models across Incomparable Spaces
Charlotte Bunne, David Alvarez-Melis, Andreas Krause, Stefanie Jegelka
ICML'19: International Conference on Machine Learning. 2019.
Earlier version at R2L: NeurIPS'18 Workshop on Relational Representation Learning. Best Paper Award.
Towards Robust, Locally Linear Deep Networks
Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola
ICLR'19: International Conference on Learning Representations. 2019.
Towards Optimal Transport with Global Invariances
David Alvarez-Melis, Stefanie Jegelka, Tommi S. Jaakkola
AISTATS'19: Artificial Intelligence and Statistics. 2019.
Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi S. Jaakkola
NeurIPS'18: Neural Information Processing Systems. 2018.
Gromov-Wasserstein Alignment of Word Embedding Spaces
David Alvarez-Melis, Tommi S. Jaakkola
EMNLP'18: Empirical Methods in Natural Language Processing. 2018. Oral Presentation.
Game-theoretic Interpretability for Temporal Modeling
Guang-He Lee, David Alvarez-Melis, Tommi S. Jaakkola
Fairness, Accountability, and Transparency in Machine Learning (@ICML 2018).
On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi S. Jaakkola
Workshop on Human Interpretability in Machine Learning (@ICML 2018).
Structured Optimal Transport
David Alvarez-Melis, Tommi S. Jaakkola, Stefanie Jegelka
AISTATS'18: Artificial Intelligence and Statistics. 2018. Oral Presentation.
Earlier version at the NIPS Workshop on Optimal Transport for Machine Learning, 2017 (Extended Oral).
The Emotional GAN: Priming Adversarial Generation of Art with Emotion
David Alvarez-Melis, Judith Amores
NeurIPS Workshop on Machine Learning for Creativity and Design. 2018.
Distributional Adversarial Networks
Chengtao Li*, David Alvarez-Melis*, Keyulu Xu, Stefanie Jegelka, Suvrit Sra
Preprint. 2017.
A Causal Framework for Explaining the Predictions of Black-Box Sequence-to-Sequence Models
David Alvarez-Melis, Tommi S. Jaakkola
EMNLP'17: Empirical Methods in Natural Language Processing. 2017.
Tree-structured Decoding with Doubly-recurrent Neural Networks
David Alvarez-Melis, Tommi S. Jaakkola
ICLR'17: International Conference on Learning Representations. 2017.
Word Embeddings as Metric Recovery in Semantic Spaces
Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola
TACL: Transactions of the Association for Computational Linguistics. 2016 (presented at ACL'16).
Topic Modeling in Twitter: Aggregating Tweets by Conversations
David Alvarez-Melis*, Martin Saveski*
ICWSM'16: International AAAI Conference on Web and Social Media. 2016. (Short Paper)
Word, graph and manifold embedding from Markov processes
Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola
NIPS 2015 Workshop on Nonparametric Methods for Large Scale Representation Learning. Oral presentation.
A translation of 'The characteristic function of a random phenomenon' by Bruno de Finetti
David Alvarez-Melis, Tamara Broderick
Translation. 2015.
The Matrix Multiplicative Weights Algorithm for Domain Adaptation
David Alvarez-Melis (advisor: Mehryar Mohri)
MS Thesis, Courant Institute. 2013.
Lax-Milgram's Theorem: Generalizations and Applications
David Alvarez-Melis (advisor: Carlos Bosch Giral)
BSc Thesis, ITAM. 2011.
Explaining is understanding.
The following extract is from David Goodstein's book on Feynman:
"Feynman was a truly great teacher. He prided himself on being able to devise ways to explain even the most profound ideas to beginning students. Once, I said to him, 'Dick, explain to me, so that I can understand it, why spin one-half particles obey Fermi-Dirac statistics.' Sizing up his audience perfectly, Feynman said, 'I'll prepare a freshman lecture on it.' But he came back a few days later to say, 'I couldn't do it. I couldn't reduce it to the freshman level. That means we don't really understand it.'"
Some current and past courses I have TA'd:
Full CV in PDF (or a shorter Résumé).
Outside of research, I enjoy running, brewing beer, and playing guitar. As if writing papers for a living weren't enough, I sporadically write non-academic stuff too (mostly short stories and poems in Spanish). I also like quotes. Here are a few more:
"We cannot solve our problems with the same thinking we used when we created them." - A. Einstein
"The real danger is not that computers will begin to think like men, but that men will begin to think like computers." - Sydney J. Harris
This website was built with Jekyll, based on a template by my [friend|co-author|ex-roommate] and all-around awesome person, Martin Saveski.