Lu Mi   米璐

I am currently a Shanahan Foundation Fellow at the Allen Institute for Brain Science and a postdoctoral researcher at the University of Washington, where I work closely with Dr. Uygar Sümbül, Prof. Matthew Golub, Prof. Edgar Walker, and Prof. Eric Shea-Brown. Prior to that, I received my Ph.D. from MIT CSAIL in 2022, where I studied in the Computational Connectomics Group. I was fortunate to be advised by Prof. Nir Shavit at MIT and co-advised by Prof. Aravinthan D.T. Samuel and Prof. Jeff W. Lichtman at the Harvard Center for Brain Science. During my Ph.D., I was also a visiting researcher at the HHMI Janelia Research Campus, working with Dr. Srinivas C. Turaga, and I held research internships at Google and Waymo.

I received my M.S. degree from MIT EECS in 2019 and my B.S. degree from Tsinghua University in 2017.

Email  /  Google Scholar  /  Twitter  /  CV

Research

My research interest is NeuroAI, which studies the intersection of natural and artificial intelligence. During my Ph.D., I developed deep learning tools for fast, scalable automatic connectomics pipelines and for whole-brain modeling that links anatomical structure to neural activity. In my current collaboration with investigators at the Allen Institute and the University of Washington, I am developing interpretable deep learning approaches, including probabilistic modeling and representation learning, for biological and artificial systems. My research interests broadly include:
(1) Developing efficient and scalable automatic pipelines to map the brain;
(2) Modeling the brain with multi-modal neural data;
(3) Understanding the robustness and efficiency of coding, computation, and learning in biological and artificial systems;
(4) Building brain-inspired AI frameworks.


I am currently on the academic job market. I am also looking to collaborate with motivated students who are passionate about NeuroAI. Drop me an email if you are interested!

Selected Publications
       * indicates equal contribution
Learning Time-Invariant Representations for Individual Neurons from Population Dynamics
Lu Mi*, Trung Le*, Tianxing He, Eli Shlizerman, Uygar Sümbül
NeurIPS 2023

We use implicit dynamical models to learn time-invariant representations for individual neurons from population dynamics, enabling functional activity to be mapped to cell types.

paper / slides

Connectome-constrained Latent Variable Model of Whole-Brain Neural Activity
Lu Mi, Richard Xu, Sridhama Prakhya, Albert Lin, Nir Shavit, Aravinthan D.T. Samuel, Srinivas C. Turaga
ICLR 2022

We model whole-brain neural activity with stochastic threshold-linear dynamics in which the network connectivity is constrained by the connectome.

paper / code
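
As a rough illustration of the idea, the sketch below simulates threshold-linear rate dynamics driven by a fixed connectome weight matrix W. It is a minimal, assumption-laden example: the function name, time constant, bias term, and noise scale are illustrative choices, not the model or inference procedure used in the paper.

    import numpy as np

    def simulate_connectome_dynamics(W, T=1000, dt=0.01, tau=0.1, noise=0.05, seed=0):
        """Simulate n neurons with rates r following
        tau * dr/dt = -r + relu(W @ r + b) + noise, where W is fixed by the connectome."""
        rng = np.random.default_rng(seed)
        n = W.shape[0]
        b = rng.normal(0.0, 0.1, size=n)        # per-neuron bias (a free parameter in this sketch)
        r = np.zeros(n)
        trace = np.empty((T, n))
        for t in range(T):
            drive = np.maximum(W @ r + b, 0.0)  # threshold-linear nonlinearity
            r = r + (dt / tau) * (-r + drive) + np.sqrt(dt) * noise * rng.normal(size=n)
            trace[t] = r
        return trace                            # (T, n) array of simulated rates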

Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate
Lu Mi, Hao Wang, Yonglong Tian, Hao He, Nir Shavit
AAAI 2022
ICML 2021 Workshop on Uncertainty & Robustness in Deep Learning

We perform a systematic exploration of training-free uncertainty estimation for dense regression, an under-recognized yet important problem, and provide a theoretical construction justifying such estimates.

paper / video / slides / poster
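
A minimal sketch of the sensitivity-as-a-surrogate idea, assuming a generic PyTorch dense-regression model; the noise scale and number of perturbations here are illustrative assumptions rather than the exact procedure reported in the paper.

    import torch

    def sensitivity_uncertainty(model, x, n_perturb=8, sigma=0.01):
        """Training-free surrogate for predictive uncertainty:
        perturb the input with small noise and measure the spread of the outputs."""
        model.eval()
        with torch.no_grad():
            outputs = []
            for _ in range(n_perturb):
                x_noisy = x + sigma * torch.randn_like(x)  # small input perturbation
                outputs.append(model(x_noisy))
            outputs = torch.stack(outputs, dim=0)
        return outputs.std(dim=0)  # per-pixel sensitivity map used as an uncertainty estimate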

HDMapGen: A Hierarchical Graph Generative Model of High Definition Maps
Lu Mi, Hang Zhao, Charlie Nash, Xiaohan Jin, Jiyang Gao, Chen Sun, Cordelia Schmid, Nir Shavit, Yuning Chai, Dragomir Anguelov
CVPR 2021

We propose HDMapGen, a hierarchical graph generative model capable of producing high-quality and diverse HD maps.

paper / video / slides / poster

Learning Guided Electron Microscopy with Active Acquisition
Lu Mi, Hao Wang, Yaron Meirovitch, Richard Schalek, Srinivas C. Turaga, Jeff W. Lichtman, Aravinthan D. T. Samuel, Nir Shavit
MICCAI 2020

We show how deep learning can accelerate and optimize single-beam scanning electron microscopy (SEM) image acquisition.

paper

Cross-Classification Clustering: An Efficient Multi-Object Tracking Technique for 3-D Instance Segmentation in Connectomics
Yaron Meirovitch*, Lu Mi*, Hayk Saribekyan, Alexander Matveev, David Rolnick, Nir Shavit
CVPR 2019

We introduce cross-classification clustering (3C), a technique that simultaneously tracks complex, interrelated objects in an image stack.

paper

Invited Talks and Services
  • Spotlight Talk on Bridge the Gap between Biological and Artificial Neural Networks, Michigan AI Symposium, 2023.
  • Spotlight Talk on Neuronal Time-Invariant Representations, NeuroAI in Montreal, 2023.
  • Guest lecture on Biophysics VAEs, Deep Learning for Neuroscience, CSE 599N, University of Washington, 2023.
  • Talk on How to Link Multi-Modal Neural Data with Deep Learning, NeuroAI in Seattle, 2022.
  • Talk on Deep Learning Tools for Next-Generation Connectomics, Allen Institute, 2022.
  • Talk on Connectome-Constrained Modeling, Computational Neuroscience Seminar, Flatiron Institute, 2022.
  • Talk on Connectome-Constrained Modeling, CVML meeting, HHMI Janelia Research Campus, 2021.
  • Talk on Cross-Classification Clustering Segmentation, Machine Learning & Biology NSF Workshop, 2019.
  • Reviewer for NeurIPS, ICLR, ICML, CVPR, ECCV, ICCV, AAAI.
  • TA for Summer Workshop on the Dynamic Brain, 2023.
  • TA for MIT 6.555 (Biomedical Signal and Image Processing) by Julie Greenberg, 2019.
Honors and Awards
  • Most Creative Application of AI, Michigan AI Symposium, 2023
  • Shanahan Foundation Fellowship, Allen Institute & University of Washington, Seattle, 2022
  • Rising Stars in EECS, 2022
  • MathWorks Fellowship, MIT EECS, 2021
  • NIH Awards, MICCAI, 2020
  • Grass Instruments Co. Fellowship, MIT EECS, 2017
  • Tang Lixing Fellowship, Tsinghua (awarded to 30 of 3,000 students), 2017
  • Best Paper Award, Optofluidics, 2016
  • 1st Prize (Meritorious Winner), COMAP's Mathematical Contest in Modeling (MCM), 2016

The design and code of this website are adapted from Jon Barron's site.