


Research Scientist at Google Brain







I am interested in designing high-performance machine learning methods that make sense to humans. Here is a short writeup about why I care.

My current focus is building interpretability methods for already-trained models (e.g., high-performance neural networks). In particular, I believe the language of explanations should include higher-level, human-friendly concepts.

Previously, I built interpretable latent variable models (featured on Talking Machines and MIT News) and structured Bayesian models of human decisions. I have applied these ideas to data from various domains: computer programming education, autism spectrum disorder data, recipes, disease data, 15 years of crime data from the city of Cambridge, human dialogue data from the AMI meeting corpus, and text-based chat data during disaster response. I graduated with a PhD from MIT CSAIL.

I gave a tutorial on interpretable machine learning at ICML 2017; the slides are here.

I am an area chair and program chair at NIPS 2017, a steering committee member and area chair at the FAT* conference, and a program committee member at ICML 2017, AAAI 2017, and IJCAI 2016.

I am an executive board member of Women in Machine Learning.

I have co-organized the ICML 2016 Workshop on Human Interpretability in Machine Learning (WHI), the second ICML 2017 Workshop on Human Interpretability in Machine Learning (WHI), and the NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems.



Google Scholar
LinkedIn



Publications


TCAV: Relative concept importance testing with Linear Concept Activation Vectors

TL;DR: We can learn human concepts in any layer of an already-trained neural network, then use them for hypothesis testing to get quantitative explanations.

Been Kim, Justin Gilmer, Fernanda Viégas, Ulfar Erlingsson, Martin Wattenberg
arXiv 2017
[pdf] [bibtex]
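The recipe is compact enough to sketch. Below is a minimal, illustrative version (not the paper's released code): fit a linear classifier that separates a concept's example activations from random activations, take the unit normal to its decision boundary as the concept activation vector (CAV), and score the concept by how often the class logit's directional derivative along the CAV is positive. The function names and the toy data here are assumptions for the sketch, and it presumes you can extract layer activations and logit gradients as numpy arrays.

```python
# Minimal TCAV-style sketch (illustrative; not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Fit a linear classifier separating concept examples from random
    examples in activation space; the CAV is the unit normal to its
    decision boundary."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)),
                        np.zeros(len(random_acts))])
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(logit_grads, cav):
    """Fraction of inputs whose class logit increases when the layer
    activation moves in the concept direction (positive directional
    derivative along the CAV)."""
    return float(np.mean(logit_grads @ cav > 0))

# Toy stand-ins for real activations and gradients:
rng = np.random.default_rng(0)
cav = compute_cav(rng.normal(0.5, 1.0, (50, 64)),
                  rng.normal(0.0, 1.0, (50, 64)))
print(tcav_score(rng.normal(0.0, 1.0, (100, 64)), cav))
```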









The (Un)reliability of saliency methods

TL;DR: Existing saliency methods can be unreliable; we should be careful when using them.

Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
NIPS 2017 Workshop on Explaining and Visualizing Deep Learning
[pdf] [bibtex]






SmoothGrad: removing noise by adding noise

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg
ICML 2017 Workshop on Visualization for Deep Learning
[pdf] [code] [bibtex]
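The title is the method: raw input gradients are visually noisy, so average the gradient over many noisy copies of the input and the noise largely cancels. A minimal sketch, assuming a caller-supplied `grad_fn` that returns the gradient of the class score with respect to the input; the function name and parameter defaults are illustrative, not the paper's API.

```python
# Minimal SmoothGrad sketch (illustrative). `grad_fn` is assumed to map
# an input array to the gradient of the class score w.r.t. that input.
import numpy as np

def smoothgrad(x, grad_fn, n_samples=50, noise_frac=0.15, seed=0):
    rng = np.random.default_rng(seed)
    sigma = noise_frac * (x.max() - x.min())  # noise scaled to input range
    grads = [grad_fn(x + rng.normal(0.0, sigma, x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)  # averaging suppresses gradient noise

# Toy example: score f(x) = sum(x**2), so the true gradient is 2x.
x = np.linspace(-1.0, 1.0, 8)
print(smoothgrad(x, lambda z: 2.0 * z))
```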




QSAnglyzer: Visual Analytics for Prismatic Analysis of Question Answering System Evaluations

Nan-Chen Chen and Been Kim
VAST 2017
[pdf] [bibtex]




Towards A Rigorous Science of Interpretable Machine Learning

Finale Doshi-Velez and Been Kim
arXiv 2017
[pdf] [bibtex]




Examples are not Enough, Learn to Criticize! Criticism for Interpretability

Been Kim, Rajiv Khanna and Sanmi Koyejo
Neural Information Processing Systems 2016
[pdf] [NIPS oral presentation talk slides] [talk video] [bibtex] [code]




Diff-clustering: Interpretable embedding example-based clustering

Been Kim, Peter Turney and Peter Clark
under review
[TBD]




Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction

Been Kim, Finale Doshi-Velez and Julie Shah
Neural Information Processing Systems 2015
[pdf] [variational inference in gory detail] [bibtex]




iBCM: Interactive Bayesian Case Model Empowering Humans via Intuitive Interaction

Been Kim, Elena Glassman, Brittney Johnson and Julie Shah
Coming soon (see my thesis for details).
[video]



Bayesian Case Model:
A Generative Approach for Case-Based Reasoning and Prototype Classification

Been Kim, Cynthia Rudin and Julie Shah
Neural Information Processing Systems 2014
[pdf] [poster] [bibtex]

This work was featured on MIT News and as an MIT front-page spotlight.


Scalable and interpretable data representation for
high-dimensional complex data

Been Kim, Kayur Patel, Afshin Rostamizadeh and Julie Shah
AAAI Conference on Artificial Intelligence 2015
[pdf] [bibtex]



A Bayesian Generative Modeling with Logic-Based Prior

Been Kim, Caleb Chacha and Julie Shah
Journal of Artificial Intelligence Research 2014
[pdf] [bibtex]



Learning about Meetings

Been Kim and Cynthia Rudin
Data Mining and Knowledge Discovery Journal 2014

[arxiv] [pdf] [bibtex]

This work was featured in The Wall Street Journal.




Inferring Robot Task Plans from Human Team Meetings:
A Generative Modeling Approach with Logic-Based Prior

Been Kim, Caleb Chacha and Julie Shah
AAAI Conference on Artificial Intelligence 2013
[pdf] [bibtex] [video]

This work was featured in:
the "Introduction to AI" course at Harvard (COMPSCI 182) by Barbara J. Grosz
[course website]
the "Human in the Loop Planning and Decision Support" tutorial at AAAI 2015 by Kartik Talamadupula and Subbarao Kambhampati
[slides from the tutorial]


Multiple Relative Pose Graphs for Robust Cooperative Mapping

Been Kim, Michael Kaess, Luke Fletcher, John Leonard, Abraham Bachrach, Nicholas Roy, and Seth Teller
International Conference on Robotics and Automation 2010
[pdf] [bibtex] [video]


Human-inspired Techniques for Human-Machine Team Planning

Julie Shah, Been Kim and Stefanos Nikolaidis
AAAI Technical Report - Human Control of Bioinspired Swarms 2013
[pdf] [bibtex]





Thesis


Interactive and Interpretable Machine Learning Models for Human Machine Collaboration

Been Kim
PhD Thesis 2015
[pdf] [bibtex] [slides]