Jay Whang, Erik M. Lindgren, Alexandros G. Dimakis.
Approximate Probabilistic Inference with Composed Flows.
arXiv preprint.
Short version accepted to the NeurIPS 2020 Workshop on Deep Learning and Inverse Problems.
Jay Whang, Qi Lei, Alexandros G. Dimakis.
Compressed Sensing with Invertible Generative Models and Dependent Noise.
arXiv preprint.
Short version accepted to the NeurIPS 2020 Workshop on Deep Learning and Inverse Problems.
Rui Shu, Hung Bui, Jay Whang, Stefano Ermon.
Training Variational Autoencoders with Buffered Stochastic Variational Inference.
The 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
Ramtin Keramati*, Jay Whang*, Patrick Cho*, and Emma Brunskill.
Fast Exploration with Simplified Models and Approximately Optimistic Planning in Model-Based Reinforcement Learning.
arXiv preprint.
Ramtin Keramati*, Jay Whang*, Patrick Cho*, and Emma Brunskill.
Strategic Exploration in Object-Oriented Reinforcement Learning.
The 35th International Conference on Machine Learning (ICML) Workshop on Exploration in RL, 2018.
Daniel De Freitas Adiwardana*, Akihiro Matsukawa*, Jay Whang*.
Using Generative Models for Semi-Supervised Learning.
Technical Report.
Jay Whang*, Akihiro Matsukawa*.
Exploring Batch Normalization in Recurrent Neural Networks.
Technical Report.
(* equal contribution)