Currently, I'm a Research Scientist at OpenAI.
I was a Postdoctoral Scholar at the UC Berkeley Artificial Intelligence Research (BAIR) lab, where I worked on deep reinforcement learning. In particular, I'm interested in sample-efficient reinforcement learning.
I received my PhD from NYU, where I worked on sample-efficient imitation and reinforcement learning. I did an internship at Facebook AI Research, completed several internships at Google Brain, and participated in the Google Student Research Advising Program.
A Walk in the Park: Learning to Walk in 20 Minutes With Model-Free Reinforcement Learning
L. Smith*, I. Kostrikov*, S. Levine
Offline Reinforcement Learning for Natural Language Generation with Implicit Language Q Learning
C. Snell, I. Kostrikov, Y. Su, M. Yang, S. Levine
Automatic Data Augmentation for Generalization in Deep Reinforcement Learning
R. Raileanu, M. Goldstein, D. Yarats, I. Kostrikov, R. Fergus
Offline Reinforcement Learning with Fisher Divergence Critic Regularization
I. Kostrikov, J. Tompson, R. Fergus, O. Nachum
Image Augmentation is All You Need: Regularizing Deep Reinforcement Learning from Pixels
I. Kostrikov*, D. Yarats*, R. Fergus
ICLR 2021, Spotlight
Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning
I. Kostrikov, K. Agrawal, D. Dwibedi, S. Levine, J. Tompson