Leilani H. Gilpin

Research Scientist

Sony AI

MIT CSAIL

I am a research scientist at Sony AI working on explainability in AI agents. I recently received my PhD in Electrical Engineering and Computer Science from MIT CSAIL, where I continue as a collaborating researcher. My research focuses on theories and methodologies for monitoring, designing, and augmenting machines that can explain themselves for diagnosis, accountability, and liability. My long-term research vision is machines that are self-explaining and intelligent by design. During my PhD, I developed “Anomaly Detection through Explanations” (ADE), a self-explaining, full-system monitoring architecture that detects and explains inconsistencies in autonomous vehicles. This allows machines and other complex mechanisms to interpret their actions and learn from their mistakes.
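
The core idea can be sketched in a few lines of Python. This is only an illustration of explanation-based anomaly detection; the names (Judgment, reconcile) and the scenario are hypothetical, not the actual ADE implementation.

    from dataclasses import dataclass

    @dataclass
    class Judgment:
        action: str       # a subsystem's proposed action, e.g. "brake"
        explanation: str  # the symbolic reason supporting that action

    def reconcile(judgments):
        # Compare per-subsystem judgments; when they disagree, surface the
        # conflicting explanations instead of silently picking a winner.
        if len({j.action for j in judgments}) == 1:
            return judgments[0]
        conflict = "; ".join(f"{j.action} because {j.explanation}"
                             for j in judgments)
        return f"ANOMALY: subsystems disagree ({conflict})"

    # Example: vision says the road is clear, LiDAR reports an obstacle.
    print(reconcile([
        Judgment("proceed", "vision detects no obstacle"),
        Judgment("brake", "LiDAR detects an obstacle at 5 m"),
    ]))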

Interests

  • Explainable AI (XAI)
  • Anomaly Detection
  • Commonsense Reasoning
  • Anticipatory Thinking for Autonomy
  • Semantic Representations of Language
  • Story-Enabled Intelligence
  • AI & Ethics

Education

  • PhD in Electrical Engineering and Computer Science, 2020

    Massachusetts Institute of Technology

  • M.S. in Computational and Mathematical Engineering, 2013

    Stanford University

  • B.S. in Computer Science, B.S. in Mathematics, Minor in Music, 2011

    UC San Diego

News

  • February 2021: I will be giving an invited talk on “Anticipatory Thinking: a Testing and Representation Challenge for Self-Driving Cars” at the 55th Annual Conference on Information Sciences and Systems.
  • January 2021: I will be on a panel about “Linking Knowledge in the Earth and Space Sciences: Knowledge Graphs/Networks connecting data and individuals” at the ESIP 2021 Winter Meeting.
  • December 2020: I will be giving a tech talk on my PhD thesis work at NeurIPS.
  • October 2020: I have been accepted as a Rising Star in EECS.
  • September 2020: I started working at Sony AI.
  • August 2020: My PhD dissertation was submitted and accepted.
  • June 2020: I passed my PhD Defense!
  • May 2020: Building on the success of the 2019 AAAI Fall Symposium, I’m helping define Anticipatory Thinking challenge problems. Learn more in our proposal and survey.
  • May 2020: I will be giving a talk on “Monitoring Opaque Learning Systems” at the ICML Workshop on Monitoring and Deploying ML.
  • May 2020: I gave a seminar about XAI on May 5th in CS 520: Knowledge Graphs. Recording and slides are available.
  • March 2020: I will be presenting a poster at the Women in Data Science (WiDS) conference in Cambridge.
  • February 2020: My paper on “Explaining Possible Futures for Robust Autonomous Decision Making” will be published in the COGSAT ’19 proceedings.
  • January 2020: My CSAIL Alliances spotlight video is available.

Publications

Anomaly Detection Through Explanations

Under most conditions, complex machines are imperfect. When errors occur, as they inevitably will, these machines need to be able to …

Explaining Possible Futures for Robust Autonomous Decision-Making

Learning From Explanations for Robust Autonomous Driving

An Adaptable Self-Monitoring Framework for Opaque Machines

Monitoring Opaque Learning Systems

Recent & Upcoming Talks

Featured talks are available as videos.

Identifying Multimodal Errors Through Explanations

In this talk, I present new methodologies for detecting and explaining errors in complex systems. My novel contribution is a …

Anomaly Detection Through Explanations

Explaining Explanations

Explanation-based Anomaly Detection

CSAIL Student Profile

Teaching

Lead Instructor

Lectures

Teaching Assistant

  • MIT - 6.905/6.945: Large-scale Symbolic Systems
  • Stanford University - CS 348A: Geometric Modeling (PhD Level Course)
  • UC San Diego - COGS 5A (Beginning Java), CSE 8A/8B (Beginning Java), CSE 5A (Beginning C), CSE 21 (Discrete Mathematics), CSE 100 (Advanced Data Structures), CSE 101 (Algorithms)

Projects

AI and ethics

The AI and ethics reading group is a student-led, campus-wide initiative.

Explanatory Games

Using internal symbolic, explanatory representations to robustly monitor agents.

Monitoring Decision Systems

An adaptable framework to supplement decision making systems with commonsense knowledge and reasonableness rules.
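
As a hypothetical illustration of a reasonableness rule, consider the sketch below; the names (is_reasonable, monitored_decision) and the speed threshold are invented for this example, not taken from the framework itself.

    def is_reasonable(action, speed_mps):
        # Commonsense rule (illustrative): never accelerate when already
        # above an urban speed limit of about 13.4 m/s (30 mph).
        if action == "accelerate" and speed_mps > 13.4:
            return False
        return True

    def monitored_decision(opaque_action, speed_mps):
        # Veto an unreasonable output from the opaque decision system,
        # falling back to a safe default instead of trusting it blindly.
        if not is_reasonable(opaque_action, speed_mps):
            return "hold"
        return opaque_action

    print(monitored_decision("accelerate", speed_mps=20.0))  # prints "hold"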

The Car Can Explain!

The methodologies and underlying technologies that allow self-driving cars and other AI-driven systems to explain behaviors and failures.

Miscellaneous

Academic Interests as a Bookshelf

  • Sylvain Bromberger - On What We Know We Don’t Know
  • Yuval Noah Harari - Sapiens
  • Marvin Minsky - The Emotion Machine
  • Roger Schank - Scripts, Plans, Goals and Understanding: an Inquiry into Human Knowledge Structures
  • Patrick C. Suppes - Introduction to Logic

Note: This is a working list. It is inspired by my colleague. Let’s pass it along.

Contact

  • lhg@mit.edu
  • MIT CSAIL 32-G530, 32 Vassar Street, Cambridge, MA 02139