Explanatory Machines

INTELLIGENT MACHINES THAT CAN INTERPRET THEIR ACTIONS AND UNDERSTAND THE BEHAVIOR OF THEIR UNDERLYING PARTS

In order to be intelligent, machines need the ability to explain themselves. My research is dedicated to creating the underlying technology and strategies for autonomous machines to be safe and intelligent. Explanations enable these machines to interpret their actions and learn from their mistakes.

About Me

Researcher | PhD Student | Engineer | Hacker

I'm a PhD student in the Department of Electrical Engineering and Computer Science (EECS, Course 6) and the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, working under the supervision of Professor Gerald Jay Sussman. My research is in the area of Artificial Intelligence, where I am working to help autonomous vehicles (and other autonomous machines) explain themselves. Before returning to academia, I worked at the Palo Alto Research Center as a Member of Technical Staff, where I worked on anomaly detection in healthcare. I received an M.S. in Computational and Mathematical Engineering from Stanford University, and a B.S. in Computer Science with Highest Honors, a B.S. in Mathematics with Honors, and a minor in Music from UC San Diego. Feel free to engage with me below!

Research

The Car Can Explain!

As autonomous machines start to take control of decisions previously entrusted to humans, there will be a need for these complex machines to explain themselves. The ability to provide coherent explanations of complex behavior is also important in the design and debugging of such systems, and it is essential to give us all confidence in the competence and integrity of our automatic helpers.

View Project

Reasonableness Monitors

As a first step towards constraining perception mechanisms to commonsense judgment, we have developed reasonableness monitors: a wrapper interface that can explain whether the output of an opaque deep neural network is reasonable. These monitors are a stand-alone system that uses careful dependency tracking, commonsense knowledge, and conceptual primitives to explain whether a perceived scene description is reasonable or not. If such an explanation cannot be made, it is evidence that either a part has failed (or was subverted) or the communication has failed. The development of reasonableness monitors is work towards generalizing that vision, with the intention of developing a system-construction methodology that enhances robustness at runtime (not at static compile time) by dynamically checking and explaining the behaviors of scene understanders for reasonableness in context. A minimal sketch of the wrapper idea appears below.
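The Python sketch below is only a rough, hypothetical illustration of that wrapper idea, assuming a toy commonsense table and a single conceptual primitive per observation; the table, labels, and function names are invented for illustration, and the actual monitors use richer dependency tracking and knowledge bases.

    # Toy sketch of a reasonableness monitor wrapped around an opaque
    # perception component. All names and knowledge here are illustrative.

    # Hypothetical commonsense knowledge: conceptual primitives that are
    # plausible for an object perceived in a driving scene.
    COMMONSENSE = {
        "pedestrian":   {"moves-self", "on-sidewalk", "on-road"},
        "car":          {"moves-self", "on-road"},
        "fire hydrant": {"stationary", "on-sidewalk"},
    }

    def monitor(label, primitive):
        """Judge an opaque network's output and return (reasonable, explanation).

        `label` is the perceived object, `primitive` a conceptual primitive
        describing how it was perceived (e.g. "moves-self", "on-road").
        """
        known = COMMONSENSE.get(label)
        if known is None:
            # No supporting knowledge: evidence that a part failed or was subverted.
            return False, f"No commonsense knowledge about '{label}'."
        if primitive in known:
            return True, f"'{label}' is reasonable: it can plausibly be {primitive}."
        return False, (f"'{label}' described as '{primitive}' is unreasonable; "
                       f"expected one of {sorted(known)}.")

    # Example: the perception system claims a fire hydrant is moving by itself.
    print(monitor("fire hydrant", "moves-self"))

Because the monitor only inspects the perception component's outputs together with its own knowledge, it can wrap any opaque component without modifying it; when no explanation can be constructed, that absence is itself the warning signal described above.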


Explaining Explainability

There has recently been a surge of work in explanatory artificial intelligence (XAI). XAI makes a system, and parts of its internals, more transparent to users by providing explanations of its decisions at some level of detail. These explanations are important to ensure algorithmic fairness, to identify potential bias or problems in the training data, and to ensure that the algorithms perform as expected. However, the explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence. A toy example of one family of post-hoc explanation methods follows below.
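As a concrete, purely illustrative example of the kind of post-hoc explanation method such a survey covers, the Python sketch below fits a local linear surrogate (LIME-style) around a single black-box prediction and reads off per-feature weights; the function and the toy black box are hypothetical and are not taken from the paper.

    # Illustrative local-surrogate explanation, not the paper's method.
    import numpy as np

    def local_explanation(predict, x, n_samples=500, scale=0.1, seed=0):
        """Fit a linear surrogate to `predict` in a neighborhood of `x`.

        Returns per-feature weights: a crude answer to which inputs
        mattered for this particular decision.
        """
        rng = np.random.default_rng(seed)
        X = x + scale * rng.standard_normal((n_samples, x.size))  # local perturbations
        y = np.array([predict(row) for row in X])                 # black-box outputs
        w, *_ = np.linalg.lstsq(np.c_[X, np.ones(n_samples)], y, rcond=None)
        return w[:-1]  # drop the intercept term

    # Hypothetical black box that mostly relies on its first input.
    blackbox = lambda v: 3.0 * v[0] + 0.1 * v[1]
    print(local_explanation(blackbox, np.array([1.0, 2.0])))  # roughly [3.0, 0.1]

Explanations of this kind are local and approximate, which is part of why the survey discusses whether current approaches are sufficient for deep neural networks.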


Gossip Patrol

Can a computer perceive images and tell stories about what it has processed? Stay tuned to find out...


My affiliations and groups

  • MIT Computer Science and Artificial Intelligence Lab
  • Machine Understanding Group
  • CSAIL-Toyota Car Can Explain! Project
  • Genesis Story Understanding Group
  • Internet Policy Research Initiative (IPRI) at MIT

Publications

Articles under Review

L.H. Gilpin, D. Bau, B.Z. Yuan, A. Bajawal, M. Specter, and L. Kagal. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning.

L.H. Gilpin, J.C. Macbeth, and E. Florentine. Monitoring Scene Understanders with Conceptual Primitive Decomposition and Commonsense Knowledge.

Peer-reviewed Papers

L.H. Gilpin, D. Olson and T. Alrashed. Perception of Speaker Personality Traits Using Speech Signals. CHI 2018 - Late Breaking Reports. Online.

L.H. Gilpin, C. Zaman, D. Olson, and B.Z. Yuan. Simulating Human Explanations of Visual Scene Understanding. Human Robot Interaction (HRI) 2018. Online.

L.H. Gilpin. Reasonableness Monitors. The 23rd AAAI/SIGAI Doctoral Consortium (DC) at AAAI-18. [To appear in proceedings].

L.H. Gilpin and B. Yuan. Getting Up to Speed on Vehicle Intelligence. Proceedings of the AAAI Spring Symposium Series, 2017. http://www.aaai.org/ocs/index.php/SSS/SSS17/paper/view/15322.

J. Agosti, L. Gilpin, G. Dang, and A. Bose. The VEICL Act: A Proposal for Safety and Security in Modern Vehicles. The Willamette Law Review. Volume 53, No. 2. Spring 2017.

J. Liu, E. Bier, A. Wilson, T. Honda, S. Kumar, L. Gilpin, J. Guerra-Gomez, and D. Davis. Graph Analysis for Detecting Fraud, Waste, and Abuse in Healthcare Data. The Twenty-Seventh Conference on Innovative Applications of Artificial Intelligence (IAAI-15).

L. Gilpin, L. Ciarletta, Y. Presse, V. Chevrier, and V. Galtier. Co-Simulation Solutions Using AA4MM-FMI Applied to Smart Space Heating Models. SIMUTOOLS 2014. 10.4108/icst.simutools.2014.254633

ArXiv Preprints

L.H. Gilpin, D. Bau, B.Z. Yuan, A. Bajawal and L. Kagal. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning.

Technical Reports

Gilpin, L. and Yang, Qian. Improving BPdual Reliability Using Householder. Course project, MS&E 318: Large-Scale Numerical Optimization, Stanford University. June 2013.

Gilpin, L. Parallelizing Processes to Minimize Length of Stay in the ER. Course project report, MS&E 292: Health Policy Modeling, Stanford University. March 2013.

Bergen, Karianne and Gilpin, Leilani. Negative News No More: Classifying News Article Headlines. Course project report, CS 229: Machine Learning, Stanford University. December 2012. http://cs229.stanford.edu/proj2012/BergenGilpin-NegativeNewsNoMore.pdf

Gilpin, Leilani. Visualizing NEES Activities Using Web Services and Object Relational Mapping. Technical Report. August 2009. http://nees.org/site/resources/pdfs/REU2009_Gilpin_Paper.pdf

Teaching

Lead Instructor

  • Artificial Intelligence and Global Risks (IAP 2018)
  • SMASH Institute - Calculus (Summer 2015)

Lectures

  • 6.S978 (Privacy Legislation in Practice: Law and Technology)

Teaching Assistant

  • Stanford University - CS 348A : Geometric Modeling (PhD Level Course)
  • UC San Diego - COGS 5A (beginning Java), CSE 8A/8B (beginning Java), CSE 5A (beginning C), CSE 21 (discrete mathematics), CSE 100 (Advanced Data Structures), CSE 101 (Algorithms)

Life-long Learning

Massachusetts Institute of Technology

Doctor of Philosophy in EECS
Computer Science

Advisor - Gerald Jay Sussman

2020 (expected)

Stanford University

M.S. in Computational and Mathematical Engineering

2013

UC San Diego

Bachelor's Degree

B.S. in Computer Science with Highest Honors, B.S. in Mathematics with Honors, Minor in Music

2011

CV


Contact and Interests

My office is located at 32 Vassar Street in the Stata Center. My office area is currently under construction, so you can find me on the 8th floor on the G-side. Feel free to send me a message.

Apart from being a researcher, I enjoy most of my time being outdoors. I'm an avid rower, swimmer, and hiker. I also enjoy experimenting with amateur photography.

When forced indoors, I enjoy knitting, reading, and a couple television shows, including Westworld and Silicon Valley. I am also a Graduate Resident Tutor, where I enjoy cooking and baking for my residents.

Online Presence