Ilya Kostrikov

Currently, I'm a Research Scientist at OpenAI.

I was a Postdoctoral Scholar at the UC Berkeley Artificial Intelligence Research lab, where I worked on deep reinforcement learning. In particular, I'm interested in sample-efficient reinforcement learning.

I received my PhD from NYU, where I worked on sample-efficient imitation and reinforcement learning. I did an internship at Facebook AI Research and several internships at Google Brain, and I participated in the Google Student Research Advising Program.

Representative publications

Offline Reinforcement Learning with Implicit Q-Learning

I. Kostrikov, A. Nair, S. Levine

ICLR 2022

Automatic Data Augmentation for Generalization in Deep Reinforcement Learning

R. Raileanu, M. Goldstein, D. Yarats, I. Kostrikov, R. Fergus

NeurIPS 2021

Offline Reinforcement Learning with Fisher Divergence Critic Regularization

I. Kostrikov, J. Tompson, R. Fergus, O. Nachum

ICML 2021

Imitation Learning via Off-Policy Distribution Matching

I. Kostrikov, O. Nachum, J. Tompson

ICLR 2020

Surface Networks

I. Kostrikov, Z. Jiang, D. Panozzo, D. Zorin, J. Bruna

CVPR 2018, Oral