Last Update: November 08, 2023.
(I've recently moved my homepage to this location, so please bear with me as I work out any bugs or issues that may arise.)
Part 3.2: Why do LLMs need Chain of Thoughts even for basic questions (e.g. was Biden born on an even day)? We show that LLMs cannot efficiently manipulate knowledge even if such knowledge is 100% extractable; + inverse knowledge search is just impossible.
— Zeyuan Allen-Zhu (@ZeyuanAllenZhu) September 27, 2023
My current research focuses on investigating the physics of language models, and AI in a broader sense. This involves designing experiments to elucidate the fundamental principles governing how transformers/GPTs learn to accomplish diverse AI tasks. By probing into the neurons of pre-trained transformers, my goal is to uncover and comprehend the intricate (and sometimes surprising!) physical mechanisms behind large language models. Our first paper in this series can be found on arXiv.
Before that, I worked on the mathematics of deep learning. That involved developing rigorous theoretical proofs of the learnability of neural networks, in ideal and theory-friendly settings, to explain certain mysterious phenomena observed in deep learning. In this area, our paper on ensemble / knowledge distillation received an award at ICLR'23; although I am most proud of our COLT'23 result that provably shows why deep learning is actually deep: it outperforms shallow learners such as layer-wise training, kernel methods, etc.
In my past life, I also worked in machine learning, optimization theory, and theoretical computer science.
In algorithm competitions, I was fortunate to win a few awards, including two IOI gold medals, a USACO world championship, an ACM/ICPC World Finals gold medal, a Google Code Jam world runner-up finish, and a USA MCM Top Prize.
In research, I was supported by a Microsoft Young Fellow Award, a Simons Student Award, and a Microsoft Azure Research Award.
For a full list, click here.