To be intelligent, machines need the ability to explain themselves. My research is dedicated to creating the underlying technology and strategies for autonomous machines to be safe and intelligent: explanations enable these machines to interpret their actions and learn from their mistakes.
I'm a PhD Student in the Department of Electrical Engineering and Computer Science (EECS-Course 6) and the Artificial Intelligence Lab (CSAIL) at MIT, working under the supervision of Professor Gerald Jay Sussman. My research is in the area of Artificial Intelligence, where I am working to help autonomous vehicles (and other autonomous machines) explain themselves. Before returning to academia, I worked at Palo Alto Research Center as a Member of Technical Staff, where I worked on anomaly detection in healthcare. I received an M.S. in Computational and Mathematical Engineering from Stanford University, and a B.S. in Computer Science with Highest Honors, a B.S. in Mathematics with Honors, and a Music Minor from UC San Diego. Feel free to engage with me below!
As autonomous machines start to take control of decisions previously entrusted to humans, there will be a need for these complex machines to explain themselves. The ability to provide coherent explanations of complex behavior is also important in the design and debugging of such systems, and it is essential to give us all confidence in the competence and integrity of our automatic helpers.
As a first step towards constraining perception mechanisms to commonsense judgment, we have developed reasonableness monitors: a wrapper interface that can explain whether the output of an opaque deep neural network is reasonable. These monitors are a stand-alone system that uses careful dependency tracking, commonsense knowledge, and conceptual primitives to explain whether a perceived scene description is reasonable. If no such explanation can be made, it is evidence that either a part has failed (or was subverted) or the communication has failed. The development of reasonableness monitors is work towards generalizing that vision, with the intention of developing a system-construction methodology that enhances robustness at runtime (not at static compile time) by dynamically checking and explaining the behaviors of scene understanders for reasonableness in context.
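The wrapper idea can be illustrated with a minimal sketch. Everything here is a hypothetical toy, not the actual system: the rule, the scene-description format, and all names are assumptions made up for illustration. The real monitors use conceptual primitives and commonsense knowledge bases rather than hand-written predicates.

```python
# Toy sketch of a reasonableness monitor: wrap an opaque perception model
# so that every output is checked against commonsense rules, and an
# explanation is produced either way. All names and rules are illustrative.

def make_monitor(model, rules):
    """Wrap an opaque model; each rule returns None (no objection) or a
    human-readable explanation of why the output is unreasonable."""
    def monitored(inputs):
        description = model(inputs)
        objections = [msg for rule in rules
                      if (msg := rule(description)) is not None]
        return {"output": description,
                "reasonable": not objections,
                "explanation": objections or
                    ["no commonsense rule was violated"]}
    return monitored

# Toy commonsense rule: a pedestrian should not be reported moving faster
# than a plausible human speed.
def plausible_speed(description):
    if description.get("label") == "pedestrian" and \
            description.get("speed_mph", 0) > 30:
        return "a pedestrian moving over 30 mph is not reasonable"
    return None

# Toy opaque model that mislabels a fast-moving object as a pedestrian.
opaque_model = lambda frame: {"label": "pedestrian", "speed_mph": 45}

monitor = make_monitor(opaque_model, [plausible_speed])
result = monitor("frame-0")  # flagged as unreasonable, with an explanation
```

The point of the wrapper shape is that the monitor needs no access to the model's internals: it judges only the perceived scene description, so it can sit outside any opaque component.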
There has recently been a surge of work in explanatory artificial intelligence (XAI). XAI systems make their decisions more transparent to users and to other parts of the system, providing explanations at some level of detail. These explanations are important to ensure algorithmic fairness, to identify potential bias or problems in the training data, and to ensure that the algorithms perform as expected. However, the explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.
Can a computer perceive images and tell stories about what it has processed? Stay tuned to find out...
L.H. Gilpin, D. Bau, B.Z. Yuan, A. Bajwa, M. Specter, and L. Kagal. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning.
L.H. Gilpin, J.C. Macbeth, and E. Florentine. Monitoring Scene Understanders with Conceptual Primitive Decomposition and Commonsense Knowledge.
L.H. Gilpin, D. Olson and T. Alrashed. Perception of Speaker Personality Traits Using Speech Signals. CHI 2018 - Late Breaking Reports. Online.
L.H. Gilpin, C. Zaman, D. Olson, and B.Z. Yuan. Simulating Human Explanations of Visual Scene Understanding. Human Robot Interaction (HRI) 2018. Online.
L.H. Gilpin. Reasonableness Monitors. The 23rd AAAI/SIGAI Doctoral Consortium (DC) at AAAI-18. [To appear in proceedings].
L.H. Gilpin and B. Yuan. Getting Up to Speed on Vehicle Intelligence. Proceedings of the AAAI Spring Symposium Series, 2017. http://www.aaai.org/ocs/index.php/SSS/SSS17/paper/view/15322.
J. Agosti, L. Gilpin, G. Dang, and A. Bose. The VEICL Act: A Proposal for Safety and Security in Modern Vehicles. The Willamette Law Review. Volume 53, No. 2. Spring 2017.
J. Liu, E. Bier, A. Wilson, T. Honda, S. Kumar, L. Gilpin, J. Guerra-Gomez, and D. Davis. Graph Analysis for Detecting Fraud, Waste, and Abuse in Healthcare Data. The Twenty-Seventh Conference on Innovative Applications of Artificial Intelligence (IAAI-15).
L. Gilpin, L. Ciarletta, Y. Presse, V. Chevrier, and V. Galtier. Co-Simulation Solutions Using AA4MM-FMI Applied to Smart Space Heating Models. SIMUTOOLS 2014. 10.4108/icst.simutools.2014.254633
Gilpin, L. and Yang, Qian. Improving BPdual Reliability Using Householder. Course project, MS&E 318: Large-Scale Numerical Optimization, Stanford University. June 2013.
Gilpin, L. Parallelizing Processes to Minimize Length of Stay in the ER. Course project report, MS&E 292: Health Policy Modeling, Stanford University. March 2013.
Bergen, Karianne and Gilpin, Leilani. Negative News No More: Classifying News Article Headlines. Course project report, CS 229: Machine Learning, Stanford University. December 2012. http://cs229.stanford.edu/proj2012/BergenGilpin-NegativeNewsNoMore.pdf
Gilpin, Leilani. Visualizing NEES Activities Using Web Services and Object Relational Mapping. Technical Report. August 2009. http://nees.org/site/resources/pdfs/REU2009_Gilpin_Paper.pdf
Advisor - Gerald Jay Sussman
B.S. in Computer Science with Highest Honors, B.S. in Mathematics with Honors, Minor in Music
My office is located at 32 Vassar Street in the Stata Center. My office area is currently under construction, so you can find me on the 8th floor on the G-side. Feel free to send me a message.
Apart from being a researcher, I enjoy most of my time being outdoors. I'm an avid rower, swimmer, and hiker. I also enjoy experimenting with amateur photography.
When forced indoors, I enjoy knitting, reading, and a couple of television shows, including Westworld and Silicon Valley. I am also a Graduate Resident Tutor, where I enjoy cooking and baking for my residents.