Roshan Rao

I'm a third-year PhD student in CS at UC Berkeley. I work with John Canny and Pieter Abbeel on protein modeling with deep networks. I'm particularly interested in the use of semi-supervised learning for biological sequences, as well as for sequence modeling more generally.

All Papers

Evaluating Protein Transfer Learning with TAPE

Neural Information Processing Systems (NeurIPS) 2019. Spotlight.

Roshan Rao*,

Nicholas Bhattacharya*,

Neil Thomas*,

Yan Duan,

Xi Chen,

John Canny,

Pieter Abbeel,

Yun S. Song

Protein modeling is an increasingly popular area of machine learning research. Semi-supervised learning has emerged as an important paradigm in protein modeling due to the high cost of acquiring supervised protein labels, but the current literature is fragmented when it comes to datasets and standardized evaluation techniques. To facilitate progress in this field, we introduce the Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology. We curate tasks into specific training, validation, and test splits to ensure that each task tests biologically relevant generalization that transfers to real-life scenarios. We benchmark a range of approaches to semi-supervised protein representation learning, which span recent work as well as canonical sequence learning techniques. We find that self-supervised pretraining is helpful for almost all models on all tasks, more than doubling performance in some cases. Despite this increase, in several cases features learned by self-supervised pretraining still lag behind features extracted by state-of-the-art non-neural techniques. This gap in performance suggests a huge opportunity for innovative architecture design and improved modeling paradigms that better capture the signal in biological sequences. TAPE will help the machine learning community focus effort on scientifically relevant problems. Toward this end, all data and code used to run these experiments are available at
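The self-supervised pretraining mentioned above typically corrupts a protein sequence and asks a model to recover the original residues. The masking step can be sketched in a few lines of plain Python; this is an illustrative toy (the function name, mask token `X`, and 15% rate are assumptions for the sketch), not the TAPE implementation:

```python
import random

# The 20 standard amino acids, as single-letter codes.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mask_sequence(seq, mask_rate=0.15, mask_token="X", seed=0):
    """Randomly replace a fraction of residues with a mask token.

    Returns the corrupted sequence plus a dict mapping each masked
    position to the original residue the model should predict.
    """
    rng = random.Random(seed)
    chars = list(seq)
    targets = {}
    for i, aa in enumerate(chars):
        if rng.random() < mask_rate:
            targets[i] = aa          # remember the true residue
            chars[i] = mask_token    # corrupt the input
    return "".join(chars), targets

masked, targets = mask_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```

A model pretrained to fill in `targets` from `masked` can then be fine-tuned, or its learned representations reused, on the five downstream TAPE tasks.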

GPU-Accelerated t-SNE and its Applications to Modern Data

High Performance Machine Learning (HPML) 2018. Outstanding Paper Award.

David Chan*,

Roshan Rao*,

Forrest Huang*,

John Canny

This paper introduces t-SNE-CUDA, a GPU-accelerated implementation of t-distributed Stochastic Neighbor Embedding (t-SNE) for visualizing datasets and models. t-SNE-CUDA significantly outperforms current implementations with 50-700x speedups on the CIFAR-10 and MNIST datasets. These speedups enable, for the first time, visualization of the neural network activations on the entire ImageNet dataset, a feat that was previously computationally intractable. We also demonstrate visualization performance in the NLP domain by visualizing the GloVe embedding vectors. From these visualizations, we can draw interesting conclusions about using the L2 metric in these embedding spaces.
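At the heart of t-SNE is the Student-t (Cauchy) kernel over pairwise squared Euclidean distances in the low-dimensional embedding, normalized over all pairs; computing these affinities for every pair is the quadratic bottleneck that GPU acceleration attacks. A minimal sketch of that kernel in plain Python (an illustration of the math only, not the CUDA implementation, and the function name is an assumption):

```python
def student_t_affinities(points):
    """Low-dimensional affinities q_ij used by t-SNE.

    Each off-diagonal entry is 1 / (1 + ||y_i - y_j||^2), the
    heavy-tailed Student-t kernel, normalized so all entries sum to 1.
    """
    n = len(points)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                w[i][j] = 1.0 / (1.0 + d2)
    z = sum(map(sum, w))  # normalizer over all pairs
    return [[v / z for v in row] for row in w]
```

Nearby points receive larger affinities than distant ones, and the gradient of the KL divergence between these q_ij and the high-dimensional affinities drives the embedding; evaluating all n^2 terms per iteration is what makes million-point datasets like ImageNet intractable without GPU parallelism.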