Zhening Li
PhD student at MIT CSAIL
· AI for math and science
· Neurosymbolic reasoning
· PL for LLM agents
E-mail: zli11010@mit.edu     X (Twitter): @zli11010
Google Scholar       LinkedIn       GitHub
Welcome! I’m Zed, a first-year PhD student at MIT CSAIL advised by Prof. Armando Solar-Lezama. Previously, I completed my undergrad and MEng at MIT, majoring in computer science and physics. My main research interests are machine learning and formal methods for math and science. I am also interested in programming frameworks for LLM-based agents. I am generously supported by the MIT Presidential Fellowship.
Publications
Zhening Li,
Armando Solar-Lezama,
Yisong Yue,
Stephan Zheng
EnCompass: Enhancing Agent Programming with Search Over Program Execution Paths
In
NeurIPS,
2025.
EnCompass is a Python framework for inference-time strategies in LLM-based agents. Calling "branchpoint()" splits the program's execution into multiple parallel branches, and EnCompass searches over the resulting tree of possible execution paths. By providing a unifying framework for inference-time strategies for AI agents, EnCompass enables easy experimentation with different scaling strategies and facilitates the discovery of better scaling laws.
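To convey the idea of searching over a program's execution paths, here is a minimal, self-contained sketch in plain Python. It is not the EnCompass API: a generator's yield stands in for branchpoint(), the choice lists are invented for illustration, and the exhaustive depth-first enumeration replaces the scored search strategies a real agent would use.

# Toy illustration only (not the EnCompass API): yield plays the role of a
# branchpoint, offering a list of choices; execution "splits" into one branch
# per choice, and we enumerate every resulting execution path.

def agent_program():
    plan = yield ["outline", "freewrite"]      # branch over planning choices
    style = yield ["formal", "casual"]         # branch again; the path tree grows
    return f"draft written via {plan} in a {style} style"

def enumerate_paths(make_program, choices_so_far=()):
    """Depth-first enumeration of all execution paths, replaying earlier choices."""
    gen = make_program()
    try:
        options = gen.send(None)               # run to the first branchpoint
        for past in choices_so_far:            # replay this branch's earlier choices
            options = gen.send(past)
    except StopIteration as done:              # program finished on this path
        yield choices_so_far, done.value
        return
    for choice in options:                     # fork: one branch per option
        yield from enumerate_paths(make_program, choices_so_far + (choice,))

for path, result in enumerate_paths(agent_program):
    print(path, "->", result)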
Zhening Li,
Gabriel Poesia,
Armando Solar-Lezama
When Do Skills Help Reinforcement Learning? A Theoretical Analysis of Temporal Abstractions
In
ICML,
2024.
Focusing on deterministic sparse-reward environments, we show theoretically and empirically that the RL performance gain from skills (temporal abstractions) is smaller in environments where solutions to states are less compressible (in the information-theoretic sense). Further theoretical results suggest that skills benefit exploration more than they benefit learning from existing experience, and that unexpressive skills such as macroactions may worsen RL performance.
Yujie Qian,
Zhening Li,
Zhengkai Tu,
Connor W Coley,
Regina Barzilay
Predictive Chemistry Augmented with Text Retrieval
In
EMNLP,
2023.
TextReact augments predictive chemistry with texts retrieved from the literature. For a given chemistry input, the retrieval model retrieves relevant texts, and the predictor uses both the original input and retrieved texts to output predictions. On reaction condition recommendation and one-step retrosynthesis, TextReact outperforms state-of-the-art deep-learning chemistry models by 58.4% and 13.6–15.7%, respectively.
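A rough, self-contained sketch of the retrieve-then-predict pipeline described above; the word-overlap retriever, placeholder predictor, corpus snippets, and inputs are invented for illustration and stand in for the paper's trained neural modules.

# Sketch only: a naive retriever and a stub predictor illustrating the
# two-stage pipeline (retrieve relevant texts, then predict from input + texts).

def retrieve(query, corpus, k=2):
    """Rank corpus passages by word overlap with the query and return the top k."""
    def overlap(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def predict(chem_input, retrieved_texts):
    """Stub predictor: a real model conditions on both the chemistry input
    (e.g. a reaction SMILES) and the retrieved literature passages."""
    context = " ".join(retrieved_texts)
    return f"prediction for {chem_input} given context: {context[:60]}..."

corpus = ["The Suzuki coupling was run in THF with Pd(PPh3)4 at 60 C.",
          "Ester hydrolysis proceeded in aqueous NaOH at room temperature."]
texts = retrieve("Suzuki coupling reaction conditions", corpus)
print(predict("c1ccccc1Br.OB(O)c1ccccc1>>c1ccc(-c2ccccc2)cc1", texts))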
Yujie Qian,
Jiang Guo,
Zhengkai Tu,
Zhening Li,
Connor W Coley,
Regina Barzilay
MolScribe: Robust Molecular Structure Recognition with Image-to-Graph Generation
In
Journal of Chemical Information and Modeling,
2023.
MolScribe translates molecular diagrams in image format to a structured graph format. It explicitly predicts atoms and bonds along with their positions, and flexibly incorporates symbolic chemistry constraints to recognize chirality and expand abbreviated structures. MolScribe achieves 76–93% accuracy on public benchmarks.
Zhening Li*,
Gabriel Poesia*,
Omar Costilla-Reyes,
Noah Goodman,
Armando Solar-Lezama
LEMMA: Bootstrapping High-Level Mathematical Reasoning with Learned Symbolic Abstractions
In
MATH-AI Workshop at NeurIPS,
2022.
Learning Mathematical Abstractions (LEMMA) learns a hierarchy of abstractions to enhance RL for mathematical reasoning. It augments Expert Iteration with an abstraction step, where solutions found so far are rewritten in terms of new higher-level actions, which then become available to solve new problems. LEMMA increases the performance and generalization of an RL agent on equation solving and fraction simplification.
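As a toy illustration of the abstraction step described above, the sketch below promotes frequent action subsequences from found solutions to new higher-level actions; the example solutions, tactic names, and frequency threshold are invented and are not the paper's actual setup.

# Toy sketch: one abstraction step of a LEMMA-style loop, where frequent
# length-2 action subsequences in existing solutions become composite actions.

from collections import Counter

def abstraction_step(solutions, min_count=2, length=2):
    """Return action subsequences that occur at least min_count times."""
    counts = Counter()
    for sol in solutions:
        for i in range(len(sol) - length + 1):
            counts[tuple(sol[i:i + length])] += 1
    return [seq for seq, c in counts.items() if c >= min_count]

# Solutions would normally come from the RL agent during Expert Iteration.
solutions = [["expand", "combine", "divide"],
             ["expand", "combine", "subtract"]]
print(abstraction_step(solutions))   # [('expand', 'combine')] becomes a new action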
Cite EnCompass: Enhancing Agent Programming with Search Over Program Execution Paths
@inproceedings{li2025encompass,
title={{EnCompass}: Enhancing Agent Programming with Search Over Program Execution Paths},
author={Li, Zhening and Solar-Lezama, Armando and Yue, Yisong and Zheng, Stephan},
booktitle={Advances in Neural Information Processing Systems},
year={2025}
}
Cite When Do Skills Help Reinforcement Learning? A Theoretical Analysis of Temporal Abstractions
@inproceedings{li2022rlskilltheory,
title={When Do Skills Help Reinforcement Learning? A Theoretical Analysis of Temporal Abstractions},
author={Li, Zhening and Poesia, Gabriel and Solar-Lezama, Armando},
booktitle={Proceedings of the 41st International Conference on Machine Learning},
year={2024}
}
Cite Predictive Chemistry Augmented with Text Retrieval
@inproceedings{qian2023textreact,
title={Predictive Chemistry Augmented with Text Retrieval},
author={Qian, Yujie and Li, Zhening and Tu, Zhengkai and Coley, Connor and Barzilay, Regina},
booktitle={Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing},
pages={12731--12745},
year={2023}
}
Cite MolScribe: Robust Molecular Structure Recognition with Image-to-Graph Generation
@article{qian2023molscribe,
title={{MolScribe}: Robust Molecular Structure Recognition with Image-to-Graph Generation},
author={Qian, Yujie and Guo, Jiang and Tu, Zhengkai and Li, Zhening and Coley, Connor W and Barzilay, Regina},
journal={Journal of Chemical Information and Modeling},
volume={63},
number={7},
pages={1925--1934},
year={2023},
publisher={ACS Publications}
}
Cite LEMMA: Bootstrapping High-Level Mathematical Reasoning with Learned Symbolic Abstractions
@inproceedings{li2022lemma,
title={{LEMMA}: Bootstrapping High-Level Mathematical Reasoning with Learned Symbolic Abstractions},
author={Li, Zhening and Poesia, Gabriel and Costilla-Reyes, Omar and Goodman, Noah and Solar-Lezama, Armando},
booktitle={MATH-AI Workshop at NeurIPS},
year={2022}
}