Zhening Li
PhD student at MIT CSAIL
· AI for math and science
· Neurosymbolic reasoning
· PL for LLM agents
E-mail: zli11010@mit.edu     X (Twitter): @zli11010
Google Scholar       LinkedIn       GitHub
Welcome! I’m Zed, a first-year PhD student at MIT CSAIL advised by Prof. Armando Solar-Lezama. Previously, I completed my undergrad and MEng at MIT, majoring in computer science and physics. My main research interests are machine learning and formal methods for math and science. I am also interested in programming frameworks for LLM-based agents. I am generously supported by the MIT Presidential Fellowship.
Selected publications
Zhening Li, Armando Solar-Lezama, Yisong Yue, Stephan Zheng
EnCompass: Enhancing Agent Programming with Search Over Program Execution Paths
In NeurIPS, 2025.
EnCompass is a programming framework for adding inference-time scaling to general programs containing LLM calls. While the Python interpreter executes a program linearly from start to finish, EnCompass can backtrack to a previous location in the program and fork the runtime into multiple parallel copies, searching for the best outcome using a user-specified search strategy.
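To make the idea concrete, here is a minimal, self-contained sketch, not EnCompass's actual API: it replays the program from the start rather than forking a live runtime, and the agent program and scores are toy stand-ins. A program with branch points is explored by beam search over decision prefixes, keeping the highest-scoring partial executions.

import heapq

def agent_program(decisions):
    """Toy stand-in for a program with LLM calls. Each branch point
    consumes one decision (here: which of 3 candidate outputs to keep)."""
    state, score = [], 0.0
    for step, d in enumerate(decisions):
        candidates = [f"step{step}-cand{i}" for i in range(3)]  # mock LLM samples
        state.append(candidates[d])
        score += (d + 1) / 3.0  # mock per-step quality signal
    return state, score

def search(num_steps, branching=3, beam=2):
    """Beam search over decision sequences, i.e. over execution paths."""
    frontier = [((), 0.0)]
    for _ in range(num_steps):
        expanded = []
        for prefix, _ in frontier:
            for d in range(branching):
                path = prefix + (d,)
                _, score = agent_program(path)  # replay the program along this path
                expanded.append((path, score))
        frontier = heapq.nlargest(beam, expanded, key=lambda x: x[1])
    return max(frontier, key=lambda x: x[1])

best_path, best_score = search(num_steps=4)
print(best_path, best_score)

Replacing the mock candidates with real LLM samples and the mock score with a programmatic or learned reward recovers the inference-time-scaling pattern that the framework automates.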
Zhening Li, Gabriel Poesia, Armando Solar-Lezama
When Do Skills Help Reinforcement Learning? A Theoretical Analysis of Temporal Abstractions
In ICML, 2024.
Focusing on deterministic sparse-reward environments, we show theoretically and empirically that the RL performance gain from skills (temporal abstractions) is smaller in environments where solutions to states are less compressible in the information-theoretic sense. Further theoretical results show that using unexpressive skills such as macroactions provably worsens RL performance in certain environments.
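As a toy illustration of the compressibility intuition (hypothetical, and far simpler than the paper's formal setting): under uninformed shortest-first enumeration in a deterministic sparse-reward task, a skill that compresses the solution shrinks the search space, while a skill that never appears in the solution only inflates the branching factor.

from itertools import product

ACTIONS = ["U", "D", "L", "R"]
SOLUTION = "RRRRUUUU"  # the unique rewarding action sequence

def search_cost(primitives, skills):
    """Count sequences enumerated (shortest-first) until the solution
    is produced, where each symbol is a primitive or a skill string."""
    alphabet = primitives + skills
    count = 0
    for length in range(1, len(SOLUTION) + 1):
        for seq in product(alphabet, repeat=length):
            count += 1
            if "".join(seq) == SOLUTION:
                return count
    return count

print(search_cost(ACTIONS, []))        # no skills
print(search_cost(ACTIONS, ["RRRR"]))  # compressing skill: far cheaper
print(search_cost(ACTIONS, ["LLLL"]))  # useless skill: strictly worse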
Yujie Qian, Zhening Li, Zhengkai Tu, Connor W. Coley, Regina Barzilay
Predictive Chemistry Augmented with Text Retrieval
In EMNLP, 2023.
TextReact augments predictive chemistry with texts retrieved from the literature. Given a chemistry input, a retrieval model fetches relevant passages, and the predictor uses both the original input and the retrieved texts to make its prediction. On reaction condition recommendation and one-step retrosynthesis, TextReact outperforms state-of-the-art deep-learning chemistry models by 58.4% and 13.6–15.7%, respectively.
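Schematically, the retrieve-then-predict pattern looks like the sketch below. This is not TextReact's code: the corpus, embedding, and final step are hypothetical stand-ins, and the toy embedding is random, whereas TextReact uses a trained retriever.

import numpy as np

corpus_texts = [
    "Suzuki couplings often use Pd(PPh3)4 in THF.",
    "Esterifications proceed with H2SO4 catalysis.",
    "Grignard reagents require anhydrous ether.",
]

def embed(text, dim=64):
    """Toy deterministic-per-run embedding (stand-in for a trained retriever)."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

corpus_vecs = np.stack([embed(t) for t in corpus_texts])

def predict(chem_input, k=2):
    """Augment the chemistry input with retrieved texts, then predict."""
    sims = corpus_vecs @ embed(chem_input)  # cosine similarities (unit vectors)
    top = np.argsort(-sims)[:k]
    augmented = chem_input + " [SEP] " + " [SEP] ".join(corpus_texts[i] for i in top)
    return augmented  # a real system would feed this to the predictor model

print(predict("CCO.CC(=O)O>>CC(=O)OCC"))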
Zhening Li*, Gabriel Poesia*, Omar Costilla-Reyes, Noah Goodman, Armando Solar-Lezama
LEMMA: Bootstrapping High-Level Mathematical Reasoning with Learned Symbolic Abstractions
In MATH-AI Workshop at NeurIPS, 2022.
LEMMA learns a hierarchy of abstractions to enhance RL for mathematical reasoning. It augments Expert Iteration with an abstraction step, where solutions found so far are rewritten in terms of new higher-level actions, which then become available to solve new problems. LEMMA increases the performance and generalization of an RL agent on equation solving and fraction simplification.
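A toy sketch of the abstraction step, hypothetical and heavily simplified relative to LEMMA: mine the most frequent adjacent pair of actions across the solutions found so far, promote it to a new higher-level action, and rewrite the solutions in terms of it.

from collections import Counter

def most_common_pair(solutions):
    """Find the most frequent adjacent action pair across all solutions."""
    pairs = Counter(tuple(s[i:i+2]) for s in solutions for i in range(len(s) - 1))
    return max(pairs, key=pairs.get) if pairs else None

def rewrite(solution, pair, name):
    """Greedily replace occurrences of the pair with the new abstraction."""
    out, i = [], 0
    while i < len(solution):
        if tuple(solution[i:i+2]) == pair:
            out.append(name)
            i += 2
        else:
            out.append(solution[i])
            i += 1
    return out

solutions = [["sub", "sub", "add", "div"], ["sub", "sub", "div"]]
pair = most_common_pair(solutions)  # ('sub', 'sub')
solutions = [rewrite(s, pair, "skill:sub-sub") for s in solutions]
print(pair, solutions)

The rewritten solutions are shorter, which is what makes imitation and subsequent search easier in later iterations.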
Cite EnCompass: Enhancing Agent Programming with Search Over Program Execution Paths
@inproceedings{li2025encompass,
  title={{EnCompass}: Enhancing Agent Programming with Search Over Program Execution Paths},
  author={Li, Zhening and Solar-Lezama, Armando and Yue, Yisong and Zheng, Stephan},
  booktitle={Conference on Neural Information Processing Systems},
  year={2025}
}
Cite When Do Skills Help Reinforcement Learning? A Theoretical Analysis of Temporal Abstractions
@inproceedings{li2022rlskilltheory,
  title={When Do Skills Help Reinforcement Learning? A Theoretical Analysis of Temporal Abstractions},
  author={Li, Zhening and Poesia, Gabriel and Solar-Lezama, Armando},
  booktitle={Proceedings of the 41st International Conference on Machine Learning},
  year={2024}
}
Cite Predictive Chemistry Augmented with Text Retrieval
@inproceedings{qian2023textreact,
  title={Predictive Chemistry Augmented with Text Retrieval},
  author={Qian, Yujie and Li, Zhening and Tu, Zhengkai and Coley, Connor W. and Barzilay, Regina},
  booktitle={Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing},
  pages={12731--12745},
  year={2023}
}
Cite LEMMA: Bootstrapping High-Level Mathematical Reasoning with Learned Symbolic Abstractions
@inproceedings{li2022lemma,
  title={{LEMMA}: Bootstrapping High-Level Mathematical Reasoning with Learned Symbolic Abstractions},
  author={Li, Zhening and Poesia, Gabriel and Costilla-Reyes, Omar and Goodman, Noah and Solar-Lezama, Armando},
  booktitle={MATH-AI Workshop at NeurIPS},
  year={2022}
}