Yoon Kim
I am an assistant professor at MIT (EECS/CSAIL). I obtained my PhD in computer science from Harvard University, where I was advised by Alexander Rush.
yoonkim@mit.edu / CV / Google Scholar
Research
I work on natural language processing and machine learning. Current interests include:
- Efficient training and deployment of large-scale models
- Understanding the capabilities and limitations of language models
- Symbolic mechanisms for controlling and augmenting neural networks
Group
Postdocs
Hadeel Al-Negheimish
PhD Students
Lucas Torroba Hennigen
Tiwa Eisape (co-advised with Roger Levy)
Han Guo (co-advised with Eric Xing)
Ani Nrusimha
Abbas Zeitoun
Linlu Qiu
Zhaofeng Wu
Songlin Yang
Isha Puri (co-advised with Marzyeh Ghassemi)
Former Members
Bailin Wang (Postdoc → Apple)
Teaching
6.S986: Large Language Models and Beyond (Spring 2024)
6.8610: Quantitative Methods for Natural Language Processing (Fall 2023)
Recent Papers [all publications]
- Parallelizing Linear Transformers with the Delta Rule over Sequence Length
Songlin Yang, Bailin Wang, Yu Zhang, Yikang Shen, Yoon Kim
NeurIPS 2024 [paper, code, slides]
- Learning to Decode Collaboratively with Multiple Language Models
Shannon Zejiang Shen, Hunter Lang, Bailin Wang, Yoon Kim, David Sontag
ACL 2024 [paper, code]
- What Do Language Models Hear? Probing for Auditory Representations in Language Models
Jerry Ngo, Yoon Kim
ACL 2024 [paper]
- Gated Linear Attention Transformers with Hardware-Efficient Training
Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, Yoon Kim
ICML 2024 [paper, slides, code]
- In-Context Language Learning: Architectures and Algorithms
Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas
ICML 2024 [paper, code]