I'm a research scientist at Google DeepMind working on Gemini.
I received my PhD from UT Austin, where I was advised by Prof. Alex Dimakis.
Prior to that, I studied CS and math at USC and Stanford, and spent a few years at YouTube training classifiers and building backend infrastructure.
My research interests lie broadly in deep generative modeling, with the goal of making it work at scale.
In particular, I'm interested in using likelihood-based models for useful downstream tasks such as image/video generation, inverse problems, and compression.
A technique for distributionally solving inverse problems, using a pretrained normalizing flow model as the prior and composing it with another learned flow model.
A model-based RL technique that improves sample efficiency in hard-exploration environments through optimistic exploration and fast learning via object representations.