Lu Mi   米璐

I am currently a Shanahan Foundation Fellow at the Allen Institute and a postdoctoral researcher at the University of Washington, Seattle, where I work closely with Dr. Uygar Sümbül, Prof. Matthew Golub, Prof. Edgar Walker, and Prof. Eric Shea-Brown. Prior to that, I received my Ph.D. from MIT CSAIL in 2022, where I was a member of the Computational Connectomics Group. I was fortunate to be advised by Prof. Nir Shavit at MIT and co-advised by Prof. Aravinthan D.T. Samuel and Prof. Jeff W. Lichtman at the Harvard Center for Brain Science. During my Ph.D., I was also a visiting researcher at the HHMI Janelia Research Campus, working with Dr. Srinivas C. Turaga, and I held research internships at Google and Waymo.

I received my M.S. degree from MIT EECS in 2019 and my B.S. degree from Tsinghua University in 2017.

Email  /  Google Scholar  /  Twitter  /  CV

Research

My research interest is NeuroAI, which studies the intersection of natural and artificial intelligence. During my Ph.D., I developed deep learning tools for fast and scalable automatic connectomics pipelines to map the brain, and linked anatomical structure to neural activity with whole-brain models. In my current collaboration with investigators at the Allen Institute and the University of Washington, I am developing interpretable deep learning approaches, including probabilistic modeling and representation learning, for biological and artificial systems. My research topics broadly include
(1) Developing fast and scalable automatic pipelines to map the brain;
(2) Modeling the brain with multi-modal neural data;
(3) Understanding the robustness and efficiency of coding, computation, and learning in biological and artificial systems;
(4) Building brain-inspired AI frameworks.


I am currently on the academic job market in fall/winter 2023. Meanwhile, I am looking for collaborations with motivated students who are passionate about NeuroAI. Drop me an email if you are interested!

News
Selected Publications
       * indicates equal contribution
Learning Time-Invariant Representations for Individual Neurons from Population Dynamics
Lu Mi*, Trung Le*, Tianxing He, Eli Shlizerman, Uygar Sümbül
NeurIPS 2023

We use implicit dynamical models to learn time-invariant representations of individual neurons from population dynamics, which enables mapping functional activity to transcriptomic identity.

Connectome-constrained Latent Variable Model of Whole-Brain Neural Activity
Lu Mi, Richard Xu, Sridhama Prakhya, Albert Lin, Nir Shavit, Aravinthan D.T. Samuel, Srinivas C. Turaga
ICLR 2022

We model whole-brain neural activity with stochastic threshold-linear dynamics whose connectivity is constrained by the connectome; a minimal sketch of such dynamics appears after the links below.

pdf / code
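
A minimal, illustrative sketch of connectome-constrained, stochastic threshold-linear dynamics, not the paper's implementation: the binary mask W_mask below is a hypothetical stand-in for a real connectome, and the weights, nonlinearity, and noise scale are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                        # number of neurons
    W_mask = rng.random((n, n)) < 0.1             # hypothetical binary connectome adjacency
    W = W_mask * rng.normal(0.0, 0.5, (n, n))     # synaptic weights only where the connectome allows
    b = rng.normal(0.0, 0.1, n)                   # per-neuron bias
    dt, tau, sigma = 0.01, 0.1, 0.05              # step size, time constant, noise scale

    def step(x):
        """One Euler-Maruyama step of dx = (-x + relu(W x + b)) dt / tau + noise."""
        drive = np.maximum(W @ x + b, 0.0)        # threshold-linear (ReLU) nonlinearity
        noise = sigma * np.sqrt(dt) * rng.normal(size=x.shape)
        return x + dt / tau * (-x + drive) + noise

    x = np.zeros(n)
    trajectory = []
    for _ in range(1000):                         # simulate latent whole-brain activity
        x = step(x)
        trajectory.append(x)

Restricting W to the connectome's sparsity pattern is what ties the latent dynamics to anatomy; the rest of the sketch is a generic stochastic rate model.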

Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate
Lu Mi, Hao Wang, Yonglong Tian, Hao He, Nir Shavit
AAAI 2022
ICML 2021 Workshop on Uncertainty & Robustness in Deep Learning

We perform a systematic exploration of training-free uncertainty estimation for dense regression, an under-explored yet important problem, and provide a theoretical construction justifying such estimates; a brief sketch of the sensitivity surrogate appears after the links below.

arxiv / video / slides / poster
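
A minimal sketch of sensitivity as a training-free uncertainty surrogate, assuming a generic pretrained dense-regression model (e.g. depth or flow) callable on a batched tensor; the perturbation scale and sample count are illustrative choices, not values from the paper.

    import torch

    def sensitivity_uncertainty(model, x, n_samples=8, eps=0.01):
        """Perturb the input with small noise and use the spread of the resulting
        predictions as a per-pixel uncertainty map; no retraining or model
        modification is required."""
        model.eval()
        with torch.no_grad():
            preds = []
            for _ in range(n_samples):
                x_perturbed = x + eps * torch.randn_like(x)   # small input perturbation
                preds.append(model(x_perturbed))
            preds = torch.stack(preds)                        # (n_samples, B, ...)
        return preds.mean(dim=0), preds.std(dim=0)            # prediction, uncertainty map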

HDMapGen: A Hierarchical Graph Generative Model of High Definition Maps
Lu Mi, Hang Zhao, Charlie Nash, Xiaohan Jin, Jiyang Gao, Chen Sun, Cordelia Schmid, Nir Shavit, Yuning Chai, Dragomir Anguelov
CVPR 2021

We propose HDMapGen, a hierarchical graph generative model capable of producing high-quality and diverse HD maps.

pdf / video / slides / poster

Learning Guided Electron Microscopy with Active Acquisition
Lu Mi, Hao Wang, Yaron Meirovitch, Richard Schalek, Srinivas C. Turaga, Jeff W. Lichtman, Aravinthan D. T. Samuel, Nir Shavit
MICCAI 2020

We show how to use deep learning to accelerate and optimize single-beam SEM acquisition of images.

arxiv

Cross-Classification Clustering: An Efficient Multi-Object Tracking Technique for 3-D Instance Segmentation in Connectomics
Yaron Meirovitch*, Lu Mi*, Hayk Saribekyan, Alexander Matveev, David Rolnick, Nir Shavit
CVPR 2019

We introduce cross-classification clustering (3C), a technique that simultaneously tracks complex, interrelated objects in an image stack.

pdf

Professional Services
  • Reviewer for NeurIPS, ICLR, CVPR, ECCV, ICCV, AAAI.
  • Guest lecture on Biophysics VAEs, CSE 599N (Deep Learning for Neuroscience), University of Washington, Seattle, 2023.
  • Talk on How to Link Multi-Modal Neural Data with Deep Learning, NeuroAI in Seattle, 2022.
  • Talk on Deep Learning Tools for Next-Generation Connectomics, Allen Institute, 2022.
  • Talk on Connectome-Constrained Modeling, Computational Neuroscience Seminar, Flatiron Institute, 2022.
  • Talk on Connectome-Constrained Modeling, CVML meeting, HHMI Janelia Research Campus, 2021.
  • Talk on Cross-Classification Clustering Segmentation, Machine Learning & Biology NSF Workshop, 2019.
  • TA for MIT 6.555 (Biomedical Signal and Image Processing), taught by Julie Greenberg, 2019.
Honors and Awards
  • Shanahan Foundation Fellowship, Allen Institute & University of Washington, Seattle, 2022
  • Rising Stars in EECS, 2022
  • MathWorks Fellowship, MIT EECS, 2021
  • NIH Awards, MICCAI, 2020
  • Grass Instruments Co. Fellowship, MIT EECS, 2017
  • Tang Lixing Fellowship, Tsinghua (30 out of 3000), 2017
  • Best Paper Award, Optofluidics, 2016
  • 1st Prize (Meritorious Winner), COMAP's Mathematical Contest in Modeling (MCM), 2016

The design and code of this website is adapted from Jon Barron's site.