Pavel Izmailov
Contact: pi390@nyu.edu, Twitter
I am a Researcher at Anthropic. I am primarily interested in reinforcement learning, reasoning, AI for science and AI alignment.
Starting in Fall 2025, I will be joining NYU as an Assistant Professor in the Tandon CSE department, with a courtesy appointment in the Courant CS department. I am also a member of the NYU CILVR Group.
Previously, I worked on reasoning and superintelligent AI alignment at OpenAI.
I am hiring PhD students to work with me at NYU starting Fall 2025. Please apply to the PhD program in the CSE department (deadline: December 1) or the CS department (deadline: December 12) and mention my name in your application. You are welcome to email me at pavel.recruiting@gmail.com with your CV and a short description of your research interests. Admissions are handled by a centralized committee.
Due to the high volume of applications, I will not be able to respond to every email. Please do not be discouraged if you do not hear back from me.
My research interests center broadly on understanding how deep neural networks work. I am excited about a wide array of topics in core machine learning, including:
- Problem-solving and reasoning in AI
- Reinforcement learning, planning and search
- Interpretability of deep learning models
- AI for scientific discovery and math
- Generalization and robustness of AI models
- Technical AI alignment
- Probabilistic deep learning, uncertainty estimation and Bayesian methods
Recent Highlights
I contributed to the recent OpenAI o1 models, a new state of the art in LLM reasoning. Our work on weak-to-strong generalization was covered by WIRED, MIT Technology Review, and others. Our work on Bayesian model selection was recognized with an Outstanding Paper Award 🏆 at ICML 2022!
Links
- [Home, Bio, Publications, Talks, CV, GitHub, Google Scholar, Semantic Scholar]
Selected Papers
*Equal first authorship. Full list of papers available here.

- Learning to Reason with LLMs
  2024
  [OpenAI blog]
- Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision
  2023
  [PDF, ArXiv, OpenAI blog, Code] [WIRED, TechCrunch, MIT Technology Review, IEEE Spectrum]
- FlexiViT: One Model for All Patch Sizes
  Conference on Computer Vision and Pattern Recognition (CVPR), 2023
  [PDF, ArXiv, Code]
- Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
  International Conference on Learning Representations (ICLR), 2023
  🌟 Spotlight Presentation
  [PDF, ArXiv, Code]
- On Feature Learning in the Presence of Spurious Correlations
  Neural Information Processing Systems (NeurIPS), 2022
  [PDF, ArXiv, Code]
- Bayesian Model Selection, the Marginal Likelihood, and Generalization
  International Conference on Machine Learning (ICML), 2022
  🏆 Outstanding Paper Award, 📢 Long Talk (Oral)
  [PDF, ArXiv, Code]
- Dangers of Bayesian Model Averaging under Covariate Shift
  Neural Information Processing Systems (NeurIPS), 2021
  [PDF, ArXiv, Poster, Code]
- What Are Bayesian Neural Network Posteriors Really Like?
  International Conference on Machine Learning (ICML), 2021
  📢 Long Talk (Oral)
  [PDF, ArXiv, Code, HMC samples, Poster, NeurIPS competition]
- Why Normalizing Flows Fail to Detect Out-of-Distribution Data
  Neural Information Processing Systems (NeurIPS), 2020
  [PDF, ArXiv, Code]
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization
  Neural Information Processing Systems (NeurIPS), 2020
  [PDF, ArXiv, Code]
- A Simple Baseline for Bayesian Uncertainty in Deep Learning
  Neural Information Processing Systems (NeurIPS), 2019
  [PDF, ArXiv, Code, Poster, Video]
- Averaging Weights Leads to Wider Optima and Better Generalization
  Uncertainty in Artificial Intelligence (UAI), 2018
  📢 Oral Presentation
  [PDF, ArXiv, Code, Poster, Slides, PyTorch Blogpost, Towards Data Science Blogpost, fast.ai Blogpost]
- Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs
  Neural Information Processing Systems (NeurIPS), 2018
  🌟 Spotlight Presentation
  [PDF, ArXiv, Code, Poster, Slides, Video, Blogpost]