Behrooz Tahmasebi
Postdoctoral Fellow in Applied Mathematics and Computer Science
Geometric Machine Learning Group, Harvard University
Advisor: Prof. Melanie Weber
Ph.D. in EECS from MIT CSAIL (Advisor: Prof. Stefanie Jegelka).
CV | Google Scholar | Email: firstname_lastname at seas dot harvard dot edu
Research
My research interests lie at the intersection of geometric machine learning (including symmetries, manifolds, and graphs), deep learning theory, and the foundations of large language models. From an applied mathematics perspective, I am also interested in applied group representation theory, harmonic analysis, the spectral theory of manifolds, and differential geometry, together with their connections to machine learning and statistics.
For more about this research direction, see my recent NeurIPS 2025 tutorial on geometric machine learning.
Service
Reviewer for conferences: NeurIPS, ICML, ICLR, AISTATS, AAAI, IEEE ISIT.
Reviewer for journals: IEEE Transactions on Information Theory, IEEE Transactions on Neural Networks and Learning Systems, Information and Inference: A Journal of the IMA.
Area Chair: ICML (2026–), NeurIPS (2026–).
Tutorial
Recent Developments in Geometric Machine Learning: Foundations, Models, and More, NeurIPS 2025.
Media Coverage
- New algorithms enable efficient machine learning with symmetric data, MIT News, 2025.
- How symmetry can come to the aid of machine learning, MIT News, 2024.
Publications
* denotes equal contribution.
- When Does Invariant Score Matching Transfer Across Dimensions?
Behrooz Tahmasebi and Melanie Weber.
Submitted (under review), 2026.
- Spontaneous Symmetry Breaking via Regularized Optimization: Hardness and Approximation.
Ashkan Soleymani*, Behrooz Tahmasebi*, Reyhaneh Hosseinpour, Tess Smidt, Patrick Jaillet, and Stefanie Jegelka.
Submitted (under review), 2026.
- Sparse Data Augmentation for Optimization with Provable Guarantees.
Behrooz Tahmasebi and Melanie Weber.
Submitted (under review), 2026.
- Symmetries in Weight Space Learning: To Retain or Remove?
Fynn Kiwit*, Behrooz Tahmasebi*, and Stefanie Jegelka.
Submitted (under review), 2026.
- Data Augmentation: A Fourier Analysis Perspective.
Behrooz Tahmasebi, Melanie Weber, and Stefanie Jegelka.
COLT 2026; Oral presentation at the DeepMath Conference 2025.
- Efficient Learning and Symmetry Discovery under Exact Invariances.
Ashkan Soleymani*, Behrooz Tahmasebi*, Patrick Jaillet, and Stefanie Jegelka.
COLT 2026; Oral presentation at the NeurReps Workshop (NeurIPS 2025).
- Adaptive Symmetry Discovery for Dynamical System Identification.
Behrooz Tahmasebi and Melanie Weber.
ICML 2026.
- Achieving Approximate Symmetry Is Exponentially Easier than Exact Symmetry. [arXiv]
Behrooz Tahmasebi and Melanie Weber.
ICLR 2026; Oral presentation at the TAG-DS Workshop 2025.
- Geometric Algorithms for Neural Combinatorial Optimization with Constraints. [arXiv]
Nikolaos Karalias, Akbar Rafiey, Yifei Xu, Zhishang Luo, Behrooz Tahmasebi, Connie Jiang, and Stefanie Jegelka.
NeurIPS 2025.
- Learning with Exact Invariances in Polynomial Time. [preprint]
Ashkan Soleymani*, Behrooz Tahmasebi*, Stefanie Jegelka, and Patrick Jaillet.
ICML 2025, Spotlight paper (top 2.6% of submissions).
- Generalization Bounds for Canonicalization: A Comparative Study with Group Averaging. [conference] [pdf]
Behrooz Tahmasebi and Stefanie Jegelka.
ICLR 2025.
- A Robust Kernel Statistical Test of Invariance: Detecting Subtle Asymmetries. [pdf]
Ashkan Soleymani*, Behrooz Tahmasebi*, Stefanie Jegelka, and Patrick Jaillet.
AISTATS 2025, Oral presentation (top 2% of submissions).
- Regularity in Canonicalized Models: A Theoretical Perspective. [pdf]
Behrooz Tahmasebi and Stefanie Jegelka.
AISTATS 2025.
- Coded Computing for Resilient Distributed Computing: A Learning-Theoretic Framework. [arXiv]
Parsa Moradi, Behrooz Tahmasebi, and Mohammad Ali Maddah-Ali.
NeurIPS 2024.
- A Universal Class of Sharpness-Aware Minimization Algorithms. [arXiv] [conference] [workshop]
Behrooz Tahmasebi, Ashkan Soleymani, Dara Bahri, Stefanie Jegelka, and Patrick Jaillet.
ICML 2024. Best Paper Award, HiLD Workshop at ICML 2024.
- Sample Complexity Bounds for Estimating Probability Divergences under Invariances. [arXiv]
Behrooz Tahmasebi and Stefanie Jegelka.
ICML 2024.
- The Exact Sample Complexity Gain from Invariances for Kernel Regression. [arXiv]
Behrooz Tahmasebi and Stefanie Jegelka.
NeurIPS 2023, Spotlight paper (top 3.6% of submissions).
- The Power of Recursion in Graph Neural Networks for Counting Substructures. [conference]
Behrooz Tahmasebi, Derek Lim, and Stefanie Jegelka.
AISTATS 2023, Oral presentation (top 1.9% of submissions).
- The Capacity of Associated Subsequence Retrieval. [journal]
Behrooz Tahmasebi, Mohammad Ali Maddah-Ali, and Seyed Abolfazl Motahari.
IEEE Transactions on Information Theory, 2021.
- Private Function Computation.
Behrooz Tahmasebi and Mohammad Ali Maddah-Ali.
IEEE International Symposium on Information Theory, 2020.
- Private Sequential Function Computation. [arXiv]
Behrooz Tahmasebi and Mohammad Ali Maddah-Ali.
IEEE International Symposium on Information Theory, 2019.
- Information Theory of Mixed Population Genome-Wide Association Studies.
Behrooz Tahmasebi, Mohammad Ali Maddah-Ali, and Seyed Abolfazl Motahari.
IEEE Information Theory Workshop, 2018.
- Genome-Wide Association Studies: Information Theoretic Limits of Reliable Learning.
Behrooz Tahmasebi, Mohammad Ali Maddah-Ali, and Seyed Abolfazl Motahari.
IEEE International Symposium on Information Theory, 2018.
- Optimum Transmission Delay for Function Computation in NFV-Based Networks: The Role of Network Coding and Redundant Computing.
Behrooz Tahmasebi, Mohammad Ali Maddah-Ali, Saeedeh Parsaeefard, and Babak Khalaj.
IEEE Journal on Selected Areas in Communications, 2018.
- On the Identifiability of Finite Mixtures of Finite Product Measures.
Behrooz Tahmasebi, Seyed Abolfazl Motahari, and Mohammad Ali Maddah-Ali.
Manuscript, 2018.