Who am I?
My name is Tzu-Mao Li. I am an incoming assistant professor in UC San Diego's Computer Science & Engineering department. Check out my webpage if you haven't visited it yet.
For prospective students/postdocs
I am recruiting!
If you want to work with me as a graduate student, please apply through the UCSD CSE department, and mention me in your application materials. If you are interested in a postdoc position, please send me an email (firstname.lastname@example.org). You don't have to send me an email if you are applying to UCSD as a graduate student, unless you have specific questions. I am happy to answer questions about the application and my research.
I would like to see the following in your application materials:
- What are your research interests? Why?
- How does your background support your interests?
- How do you see your research field advancing? What are the interesting problems you want to solve? What are your plans?
- How do your interests align with mine? What are your thoughts on solving the problems I list below?
Be concrete and clear. Give examples. Answer the "Why" and "What" of the questions.
Ideally you want to show a non-trivial, mature view of research fields instead of just throwing buzzwords.
Ideally you want to have a concrete proposal of a research direction, but it is fine if it is vague (you don't want me
to steal your idea anyway).
I do not expect you to answer these questions perfectly. I probably can't answer them perfectly myself either. Try your best.
I do not expect students to have prior experience with computer graphics or programming languages (it is a plus if
you have some). However, make sure you are enthusiastic about at least a subset of them (for example, the topics I
list below). I do expect students to have solid mathematics skills and/or software engineering skills. Ideally the
students should have some expertise that I don't have, so we can learn from each other.
I am not entirely sure about my advising style yet. However, I find myself generally agreeing with Toshiya Hachisuka's Graduate Study Survival Guide. I may elaborate on this part later. Let me know if you have any specific questions.
What's my research?
Check out my webpage for publications and notes.
I work on the interactions between computer graphics, vision, programming systems, and machine learning.
My main research direction is something I call "differentiable visual computing,"
where we go beyond neural networks and backpropagate through computer graphics programs.
In graphics, we explicitly model how the world behaves (often through physics), instead of relying on generic
neural networks to learn everything from scratch. Learning from data by backpropagating through graphics programs
allows us to have more control and a better understanding of
our programs' behavior, enables high performance, and makes debugging/verification easier.
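To make "backpropagating through a graphics program" concrete, here is a minimal, hypothetical sketch (the setup and names are mine, not from any real system): a one-pixel Lambertian shading function, its hand-derived gradient with respect to the albedo, and a finite-difference check.

```python
import numpy as np

# A toy "graphics program": Lambertian shading of a single pixel.
# pixel = albedo * max(0, dot(n, l))
def shade(albedo, n, l):
    return albedo * max(0.0, float(np.dot(n, l)))

# Hand-derived gradient of the pixel value w.r.t. albedo.
def grad_albedo(n, l):
    return max(0.0, float(np.dot(n, l)))

n = np.array([0.0, 0.0, 1.0])   # surface normal
l = np.array([0.0, 0.6, 0.8])   # unit-length light direction
albedo = 0.5

# Check the analytic gradient against a central finite difference.
eps = 1e-6
fd = (shade(albedo + eps, n, l) - shade(albedo - eps, n, l)) / (2 * eps)
print(grad_albedo(n, l), fd)    # both should be ~0.8
```

A real renderer composes thousands of such operations; the point is only that once each piece has a derivative, the chain rule propagates gradients through the whole program.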
The applications include: helping self-driving cars to make better decisions, training robots to interact with
the environment using physical information, creating more realistic virtual realities, designing buildings and
rooms to have better lighting, designing 3D physical objects with desired appearance and functionalities,
reconstructing 3D structures of cells from microscope images, and allowing movie artists to produce better film shots.
Differentiating general graphics programs correctly and efficiently is more difficult than differentiating a convolution layer,
due to the general computation patterns and the mathematics involved. I derive algorithms that
differentiate graphics programs while taking discontinuities into account. I design compilers that
explore trade-offs of derivative computation and generate correct and efficient code. I look at applications
where these differentiable graphics programs can be useful.
Many graphics researchers are trying to replace graphics pipelines with deep learning components. While this is
a cool direction and is within the scope of my research, one of my main directions is to introduce differentiable
graphics programs into deep learning pipelines. I claim that this is the path that will lead us to
controllable, interpretable, robust, and efficient models. I am also highly interested in non-data-driven settings,
where we try to figure out latent variables solely based on our knowledge of the physical process.
The following are some of my current research thrusts:
Differentiable rendering: How do we backpropagate through the rendering equation, so that we can infer 3D
information from 2D observations? How do we handle discontinuities, and how do we make the differentiation as
efficient as possible? How do we differentiate rendering with arbitrary geometry and material representations, while
modelling physical phenomena such as occlusion, surface and volumetric scattering, dispersion, and diffraction? Ultimately, we want to build a differentiable renderer that can generate and differentiate noise-free
million-pixel images with billions of varied primitives within seconds, while accurately modelling optics. We want to
use the renderer for artificial intelligence agents to infer 3D information, for reconstructing detailed 3D models
for virtual/augmented reality, for designing and fabricating real-world 3D objects with desired optical properties, for
designing imaging systems, and for analyzing biomedical data using inverse optics.
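A tiny illustration of why discontinuities matter (my own toy example, not an actual method): the integrand below is a step function, so differentiating the Monte Carlo samples pointwise gives a zero gradient, even though the true integral clearly varies with the parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
x = rng.uniform(0.0, 1.0, N)

theta = 0.3
# I(theta) = integral over [0,1] of step(theta - x) dx = theta
mc_estimate = np.mean(x < theta)

# "Naive autodiff": differentiate each sample's integrand w.r.t. theta.
# The step is flat almost everywhere, so every sample contributes 0.
naive_grad = np.mean(np.zeros_like(x))

# A finite difference of the estimator (with common random numbers)
# recovers the true derivative dI/dtheta = 1.
h = 0.05
fd_grad = (np.mean(x < theta + h) - np.mean(x < theta - h)) / (2 * h)
print(mc_estimate, naive_grad, fd_grad)   # ~0.3, 0.0, ~1.0
```

Handling this mismatch correctly (without resorting to finite differences) is exactly where techniques that account for the moving discontinuity, such as sampling the boundary itself, come in.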
Differentiable image processing: How do we design image processing pipelines that can learn from data,
and can process images with tens or hundreds of megapixels, in real time, on mobile devices? I claim that instead of
stacking more convolution layers and increasing overparametrization, we should generalize the programming model we use
for designing image processing algorithms. Instead of high-arithmetic-intensity deep learning layers, we want to
take building blocks from more traditional image processing algorithms, parametrize them, and differentiate through them
to learn the parameters. To achieve this we need better compilers that can fuse low-arithmetic-intensity
computation, and can differentiate array code.
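As a minimal sketch of this idea (a hypothetical example I made up, not a real pipeline): a classic sharpening block out = in + alpha * laplacian(in) with a single learnable parameter alpha, fitted by gradient descent using a hand-derived gradient.

```python
import numpy as np

def laplacian(img):
    # 4-neighbor discrete Laplacian with edge clamping.
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def sharpen(img, alpha):
    return img + alpha * laplacian(img)

rng = np.random.default_rng(0)
img = rng.uniform(size=(32, 32))
target = sharpen(img, 0.5)        # ground-truth parameter: alpha = 0.5

# Fit alpha by gradient descent on the squared error.
# d/dalpha ||img + alpha*L - target||^2 = 2 * sum((out - target) * L)
alpha, lr = 0.0, 1e-4
L = laplacian(img)
for _ in range(200):
    out = img + alpha * L
    grad = 2.0 * np.sum((out - target) * L)
    alpha -= lr * grad
print(alpha)   # converges toward 0.5
```

A real pipeline would chain many such parametrized blocks and rely on a compiler to fuse them and generate the gradient code, rather than deriving it by hand as above.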
Differentiable physics simulation: How do we backpropagate through ODE/PDE solvers, so that we can make
inferences based on the dynamics? This problem is studied in the optimal control and sensitivity analysis literature.
However, differentiation of discontinuities and boundary conditions in ODEs/PDEs is not very well-understood.
Furthermore, how do we efficiently map these computations to modern hardware? How do we reduce memory usage when
backpropagating through an iterative solver? Answering these questions will enable us to train robot
controllers orders of magnitude more efficiently, design 3D objects with physical constraints, or even build more elaborate epidemiology models.
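A minimal sketch of differentiating through a simulator (a toy of my own construction, using forward-mode sensitivity analysis rather than backpropagation): alongside the state x of x' = -k*x, an explicit Euler solver integrates the sensitivity s = dx/dk, and the result is checked against the analytic derivative.

```python
def euler_with_sensitivity(k, x0, T, steps):
    # Integrate x' = -k*x together with its parameter sensitivity
    # s = dx/dk, which satisfies s' = -x - k*s (differentiate the ODE w.r.t. k).
    h = T / steps
    x, s = x0, 0.0
    for _ in range(steps):
        # Tuple assignment uses the old x on the right-hand side.
        x, s = x + h * (-k * x), s + h * (-x - k * s)
    return x, s

k, x0, T = 0.5, 1.0, 1.0
x, s = euler_with_sensitivity(k, x0, T, steps=10_000)
# Analytic solution: x(T) = x0*exp(-k*T), dx(T)/dk = -T*x0*exp(-k*T)
print(x, s)   # ~0.6065, ~-0.6065
```

Note the memory question in the text: forward sensitivities avoid storing the trajectory but scale with the number of parameters, while reverse mode (adjoint) scales with the number of outputs but must store or recompute intermediate states.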
Domain-specific compilers for differentiable computer graphics: How do we design compilers that take
high-level computer graphics code (rendering, image processing, simulation, geometry processing), and automatically
output high-performance low-level code along with the derivative code? How do we certify the correctness? Just like how deep learning frameworks
democratize machine learning, I want to build an easy-to-use programmable differentiable graphics system that makes building
differentiable graphics pipelines as simple as training an MNIST classifier in PyTorch, while generating reliable code.
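To make the compiler idea concrete, here is a hypothetical toy version (entirely my own sketch): a few-line expression "compiler" that takes a syntax tree and emits the syntax tree of its derivative. A real differentiable-graphics compiler does this at a vastly larger scale, with the performance and correctness machinery this sketch omits.

```python
from dataclasses import dataclass

# A tiny expression language: constants, one variable, addition, multiplication.
@dataclass
class Const: value: float
@dataclass
class Var: pass
@dataclass
class Add: left: object; right: object
@dataclass
class Mul: left: object; right: object

def derive(e):
    """Source-to-source differentiation: return the AST of de/dx."""
    if isinstance(e, Const): return Const(0.0)
    if isinstance(e, Var):   return Const(1.0)
    if isinstance(e, Add):   return Add(derive(e.left), derive(e.right))
    if isinstance(e, Mul):   # product rule
        return Add(Mul(derive(e.left), e.right), Mul(e.left, derive(e.right)))
    raise TypeError(e)

def evaluate(e, x):
    if isinstance(e, Const): return e.value
    if isinstance(e, Var):   return x
    if isinstance(e, Add):   return evaluate(e.left, x) + evaluate(e.right, x)
    if isinstance(e, Mul):   return evaluate(e.left, x) * evaluate(e.right, x)
    raise TypeError(e)

# f(x) = x*x + 3*x, so f'(x) = 2*x + 3 and f'(2) = 7.
f = Add(Mul(Var(), Var()), Mul(Const(3.0), Var()))
print(evaluate(derive(f), 2.0))   # 7.0
```

Because the derivative is itself an AST, the compiler can optimize it (simplify, fuse, schedule) before generating low-level code, which is where the interesting trade-offs live.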
Accelerating physically-based rendering: Physically-based rendering is known to be time-consuming, due to its need to compute multi-dimensional integrals using Monte Carlo sampling. How do we make it faster? I believe there are two keys to the ultimate rendering algorithm: 1) re-using Monte Carlo samples through statistical analysis, and 2) replacing heuristics with data-driven components.
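One small, standard example of the "statistical analysis" flavor (illustrative only, on a 1D integral rather than a rendering integral): a control variate reduces the variance of a Monte Carlo estimate of the integral of e^x over [0,1] by subtracting a correlated function whose integral is known.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = rng.uniform(0.0, 1.0, N)

# Plain Monte Carlo estimate of I = integral of e^x over [0,1] = e - 1.
plain = np.exp(x)

# Control variate g(x) = 1 + x, whose integral over [0,1] is 1.5.
# e^x and 1 + x are highly correlated, so the residual has low variance.
cv = np.exp(x) - (1.0 + x) + 1.5

print(plain.mean(), cv.mean())   # both ~ e - 1 = 1.718...
print(plain.std(), cv.std())     # the control-variate spread is smaller
```

Rendering integrals are far higher-dimensional and the correlated "known" part must itself be constructed, but the same principle, spending analysis to squeeze more out of each sample, carries over.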
Beyond differentiation: Can we derive or automatically find *something* that approximates a function locally and informs
our inference algorithms better than derivatives do? For example, Fourier analysis or wavelet analysis gives us strictly more information
than derivatives (differentiation is a linear ramp in the frequency domain). How can we use them in optimization algorithms?
Can we find something that scales better with dimensionality than Fourier analysis? Can we develop systems that
help us find these quantities given a program?
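One concrete instance of "more than a pointwise derivative" (my illustration, not a claim from above): for a function with high-frequency wiggles, the raw derivative can point the wrong way, while a Gaussian-smoothed derivative, which is exactly a low-pass filter in the frequency domain, recovers the underlying trend.

```python
import numpy as np

def f_prime(x):
    # Derivative of f(x) = x**2 + 0.1*sin(50*x): smooth trend + fast wiggle.
    return 2 * x + 5 * np.cos(50 * x)

x0 = 0.7
raw = f_prime(x0)   # dominated by the wiggle: negative at this point

# Gaussian smoothing multiplies the spectrum by exp(-(sigma*w)**2 / 2),
# which suppresses the 50 rad/unit wiggle almost entirely.
rng = np.random.default_rng(0)
sigma = 0.2
smoothed = np.mean(f_prime(x0 + sigma * rng.standard_normal(50_000)))

print(raw, smoothed)   # raw < 0, smoothed ~ 2*x0 = 1.4
```

An optimizer following the raw derivative at x0 would step uphill on the underlying trend; the smoothed derivative, informed by the function's frequency content, does not.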
Related works of mine: Nothing here yet. Hopefully your work will be listed here!