Ilya Kostrikov
I'm a Postdoctoral Scholar at the Berkeley Artificial Intelligence Research (BAIR) Lab at UC Berkeley, where I work on deep reinforcement learning. In particular, I'm interested in sample efficiency, offline RL, and fine-tuning after offline RL pretraining.
I received my PhD from NYU, where I worked on sample-efficient imitation and reinforcement learning. I interned at Facebook AI Research, completed several internships at Google Brain, and participated in the Google Student Research Advising Program.
Email / Google Scholar / GitHub / Twitter
Representative publications
Automatic Data Augmentation for Generalization in Deep Reinforcement Learning
R. Raileanu, M. Goldstein, D. Yarats, I. Kostrikov, R. Fergus
NeurIPS 2021
Offline Reinforcement Learning with Fisher Divergence Critic Regularization
I. Kostrikov, J. Tompson, R. Fergus, O. Nachum
ICML 2021
Image Augmentation is All You Need: Regularizing Deep Reinforcement Learning from Pixels
I. Kostrikov*, D. Yarats*, R. Fergus
ICLR 2021, Spotlight
Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning
I. Kostrikov, K. Agrawal, D. Dwibedi, S. Levine, J. Tompson
ICLR 2019