Roshan Rao

I am a co-founder and research scientist at EvolutionaryScale, working on evolutionary models of proteins. Previously, I worked at Meta AI and completed my PhD at UC Berkeley, where I was advised by John Canny and Pieter Abbeel. Check out my dissertation talk for an introduction to this area of research and an overview of my past work!

Talks

All Papers

Simulating 500 million years of evolution with a language model

bioRxiv 2024.

Thomas Hayes, Roshan Rao, Halil Akin, Nicholas J. Sofroniew, Deniz Oktay, Zeming Lin, Robert Verkuil, Vincent Q. Tran, Jonathan Deaton, Marius Wiggert, Rohil Badkundri, Irhum Shafkat, Jun Gong, Alexander Derry, Raul S. Molina, Neil Thomas, Yousuf Khan, Chetan Mishra, Carolyn Kim, Liam J. Bartie, Matthew Nemeth, Patrick D. Hsu, Tom Sercu, Salvatore Candido, Alexander Rives

More than three billion years of evolution have produced an image of biology encoded into the space of natural proteins. Here we show that language models trained on tokens generated by evolution can act as evolutionary simulators to generate functional proteins that are far away from known proteins. We present ESM3, a frontier multimodal generative language model that reasons over the sequence, structure, and function of proteins. ESM3 can follow complex prompts combining its modalities and is highly responsive to biological alignment. We have prompted ESM3 to generate fluorescent proteins with a chain of thought. Among the generations that we synthesized, we found a bright fluorescent protein at a far distance (58% identity) from known fluorescent proteins. Similarly distant natural fluorescent proteins are separated by over five hundred million years of evolution.
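
A minimal sketch of this kind of multimodal prompting, assuming the open-weights release and the `esm` Python SDK that accompanies it; the import paths, model identifier, and generation arguments follow its quickstart from memory and may differ between releases:

```python
# Minimal sketch: prompt a multimodal protein language model to fill in a
# masked sequence, then generate coordinates for the result. Import paths and
# the model identifier are assumptions, not a verified API reference.
from esm.models.esm3 import ESM3
from esm.sdk.api import ESMProtein, GenerationConfig

model = ESM3.from_pretrained("esm3-open").to("cuda")  # identifier may differ by release

# A sequence prompt: underscores are mask tokens the model is asked to fill in.
prompt = ESMProtein(sequence="_" * 40 + "KVFGRCELAAAMKRHGLDNYRGYSLGNWVCAAK" + "_" * 40)

# Iteratively decode the sequence track, then predict a structure for it.
protein = model.generate(prompt, GenerationConfig(track="sequence", num_steps=8, temperature=0.7))
protein = model.generate(protein, GenerationConfig(track="structure", num_steps=8))
print(protein.sequence)
```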

Evolutionary-scale prediction of atomic level protein structure with a language model

Science 2023.

Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Nikita Smetanin, Robert Verkuil, Ori Kabeli, Yaniv Shmueli, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Salvatore Candido, Alexander Rives

Artificial intelligence has the potential to open insight into the structure of proteins at the scale of evolution. It has only recently been possible to extend protein structure prediction to two hundred million cataloged proteins. Characterizing the structures of the exponentially growing billions of protein sequences revealed by large scale gene sequencing experiments would necessitate a breakthrough in the speed of folding. Here we show that direct inference of structure from primary sequence using a large language model enables an order of magnitude speed-up in high resolution structure prediction. Leveraging the insight that language models learn evolutionary patterns across millions of sequences, we train models up to 15B parameters, the largest language model of proteins to date. As the language models are scaled they learn information that enables prediction of the three-dimensional structure of a protein at the resolution of individual atoms. This results in prediction that is up to 60x faster than state-of-the-art while maintaining resolution and accuracy. Building on this, we present the ESM Metagenomic Atlas. This is the first large-scale structural characterization of metagenomic proteins, with more than 617 million structures. The atlas reveals more than 225 million high confidence predictions, including millions whose structures are novel in comparison with experimentally determined structures, giving an unprecedented view into the vast breadth and diversity of the structures of some of the least understood proteins on Earth.
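
A minimal usage sketch of single-sequence folding, assuming the open-source fair-esm package and its `esmfold_v1` entry point; a GPU and the folding extras are assumed, and this is not the Atlas-scale inference pipeline:

```python
# Minimal sketch: predict an atomic-level structure directly from a single
# sequence (no multiple sequence alignment is built at inference time).
# Assumes `pip install "fair-esm[esmfold]"` and a CUDA device.
import torch
import esm

model = esm.pretrained.esmfold_v1()
model = model.eval().cuda()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"
with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # PDB text; per-residue pLDDT is stored in the B-factor column

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```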

A high-level programming language for generative protein design

bioRxiv 2022.

Brian Hie, Salvatore Candido, Zeming Lin, Ori Kabeli, Roshan Rao, Nikita Smetanin, Tom Sercu, Alexander Rives

Combining a basic set of building blocks into more complex forms is a universal design principle. Most protein designs have proceeded from a manual bottom-up approach using parts created by nature, but top-down design of proteins is fundamentally hard due to biological complexity. We demonstrate how the modularity and programmability long sought for protein design can be realized through generative artificial intelligence. Advanced protein language models demonstrate emergent learning of atomic resolution structure and protein design principles. We leverage these developments to enable the programmable design of de novo protein sequences and structures of high complexity. First, we describe a high-level programming language based on modular building blocks that allows a designer to easily compose a set of desired properties. We then develop an energy-based generative model, built on atomic resolution structure prediction with a language model, that realizes all-atom structure designs that have the programmed properties. Designing a diverse set of specifications, including constraints on atomic coordinates, secondary structure, symmetry, and multimerization, demonstrates the generality and controllability of the approach. Enumerating constraints at increasing levels of hierarchical complexity shows that the approach can access a combinatorially large design space.
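
Purely as an illustration of the compose-then-optimize idea, here is a schematic sketch in which each building block is a penalty on a candidate structure and a program is their sum; all names below are hypothetical placeholders, not the paper's actual language or API:

```python
# Schematic illustration only: each "building block" maps a candidate structure
# to a penalty, and a design program is evaluated by summing its blocks. All
# names here are hypothetical placeholders, not the paper's language or API.
from typing import Callable, Dict, List

Constraint = Callable[[Dict], float]  # candidate structure -> penalty (lower is better)

def symmetry(n_fold: int) -> Constraint:
    def penalty(structure: Dict) -> float:
        # Placeholder for deviation of the backbone from n-fold symmetry.
        return structure.get("symmetry_rmsd", {}).get(n_fold, 0.0)
    return penalty

def helix_fraction(target: float) -> Constraint:
    def penalty(structure: Dict) -> float:
        return abs(structure.get("helix_fraction", 0.0) - target)
    return penalty

def program_energy(program: List[Constraint], structure: Dict) -> float:
    # The generative model would sample structures that drive this sum down.
    return sum(block(structure) for block in program)

program = [symmetry(3), helix_fraction(0.6)]
candidate = {"symmetry_rmsd": {3: 1.2}, "helix_fraction": 0.45}
print(program_energy(program, candidate))  # total penalty for this candidate
```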

Language models enable zero-shot prediction of the effects of mutations on protein function

NeurIPS 2021.

Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu, Alexander Rives

Modeling the effect of sequence variation on function is a fundamental problem for understanding and designing proteins. Since evolution encodes information about function into patterns in protein sequences, unsupervised models of variant effects can be learned from sequence data. The approach to date has been to fit a model to a family of related sequences. The conventional setting is limited, since a new model must be trained for each prediction task. We show that using only zero-shot inference, without any supervision from experimental data or additional training, protein language models capture the functional effects of sequence variation, performing at state-of-the-art.
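
A minimal sketch of the masked-marginal log-odds scoring this enables, assuming the open-source fair-esm package and its ESM-1v checkpoints; this is a simplified stand-in for the paper's evaluation code:

```python
# Minimal sketch of masked-marginal scoring: mask the mutated position, run the
# language model once, and compare log-probabilities of mutant vs. wild type.
# Assumes the open-source fair-esm package (pip install fair-esm).
import torch
import esm

model, alphabet = esm.pretrained.esm1v_t33_650M_UR90S_1()
model.eval()
batch_converter = alphabet.get_batch_converter()

def score_mutation(sequence: str, pos: int, wt: str, mut: str) -> float:
    """Zero-shot log-odds score for a single substitution (pos is 0-indexed)."""
    assert sequence[pos] == wt
    _, _, tokens = batch_converter([("wt", sequence)])
    tokens[0, pos + 1] = alphabet.mask_idx  # +1 skips the prepended BOS token
    with torch.no_grad():
        logits = model(tokens)["logits"]
    log_probs = torch.log_softmax(logits[0, pos + 1], dim=-1)
    return (log_probs[alphabet.get_idx(mut)] - log_probs[alphabet.get_idx(wt)]).item()

# Positive scores mean the model prefers the mutant residue over the wild type.
print(score_mutation("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEV", pos=3, wt="A", mut="V"))
```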

MSA Transformer

ICML 2021.

Roshan Rao, Jason Liu, Robert Verkuil, Joshua Meier, John Canny, Pieter Abbeel, Tom Sercu, Alexander Rives

Unsupervised protein language models trained across millions of diverse sequences learn structure and function of proteins. Protein language models studied to date have been trained to perform inference from individual sequences. The longstanding approach in computational biology has been to make inferences from a family of evolutionarily related sequences by fitting a model to each family independently. In this work we combine the two paradigms. We introduce a protein language model which takes as input a set of sequences in the form of a multiple sequence alignment. The model interleaves row and column attention across the input sequences and is trained with a variant of the masked language modeling objective across many protein families. The performance of the model surpasses current state-of-the-art unsupervised structure learning methods by a wide margin, with far greater parameter efficiency than prior state-of-the-art protein language models.
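
A schematic sketch of the interleaved row and column attention described above, written with standard PyTorch attention layers; the tied row attention and the masked-LM head of the actual model are omitted:

```python
# Schematic sketch of interleaved row/column attention over an MSA represented
# as a (num_sequences, num_columns, dim) tensor. The real model ties row
# attention across sequences and adds a masked-LM head; both are omitted here.
import torch
import torch.nn as nn

class MSABlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_row = nn.LayerNorm(dim)
        self.norm_col = nn.LayerNorm(dim)

    def forward(self, msa: torch.Tensor) -> torch.Tensor:
        # Row attention: each sequence attends across alignment columns.
        x = self.norm_row(msa)                       # (R, C, D); rows act as the batch
        msa = msa + self.row_attn(x, x, x, need_weights=False)[0]
        # Column attention: each column attends across the aligned sequences.
        x = self.norm_col(msa).transpose(0, 1)       # (C, R, D); columns act as the batch
        msa = msa + self.col_attn(x, x, x, need_weights=False)[0].transpose(0, 1)
        return msa

msa = torch.randn(16, 128, 64)   # 16 aligned sequences, 128 columns, embedding dim 64
print(MSABlock()(msa).shape)     # torch.Size([16, 128, 64])
```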

Transformer protein language models are unsupervised structure learners

ICLR 2021.

Roshan Rao, Joshua Meier, Tom Sercu, Sergey Ovchinnikov, Alexander Rives

Unsupervised contact prediction is central to uncovering physical, structural, and functional constraints for protein structure determination and design. For decades, the predominant approach has been to infer evolutionary constraints from a set of related sequences. In the past year, protein language models have emerged as a potential alternative, but performance has fallen short of state-of-the-art approaches in bioinformatics. In this paper we demonstrate that Transformer attention maps learn contacts from the unsupervised language modeling objective. We find the highest capacity models that have been trained to date already outperform a state-of-the-art unsupervised contact prediction pipeline, suggesting these pipelines can be replaced with a single forward pass of an end-to-end model.
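
A minimal sketch of extracting contact scores from attention maps: symmetrize each head's map and apply the average product correction, with a plain mean over heads standing in for the sparse logistic regression fit in the paper:

```python
# Minimal sketch: turn per-head attention maps into contact scores by
# symmetrizing and applying the average product correction (APC). A plain mean
# over heads stands in for the sparse logistic regression fit in the paper.
import torch

def apc(scores: torch.Tensor) -> torch.Tensor:
    """Average product correction for an (L, L) score matrix."""
    row = scores.sum(dim=0, keepdim=True)
    col = scores.sum(dim=1, keepdim=True)
    return scores - (row * col) / scores.sum()

def attention_to_contacts(attn: torch.Tensor) -> torch.Tensor:
    """attn: (num_heads, L, L) attention maps -> (L, L) contact scores."""
    sym = attn + attn.transpose(-1, -2)            # contacts are symmetric
    corrected = torch.stack([apc(head) for head in sym])
    return corrected.mean(dim=0)

attn = torch.rand(20, 100, 100)                    # e.g. 20 heads over a length-100 protein
print(attention_to_contacts(attn).shape)           # torch.Size([100, 100])
```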

Single Layers of Attention Suffice to Predict Protein Contacts

bioRxiv 2020.

Nicholas Bhattacharya*, Neil Thomas*, Roshan Rao, Justas Dauparas, Peter K. Koo, David Baker, Sergey Ovchinnikov

The established approach to unsupervised protein contact prediction estimates co-evolving positions using undirected graphical models. This approach trains a Potts model on a Multiple Sequence Alignment, then predicts that the edges with highest weight correspond to contacts in the 3D structure. On the other hand, increasingly large Transformers are being pretrained on protein sequence databases but have demonstrated mixed results for downstream tasks, including contact prediction. This has sparked discussion about the role of scale and attention-based models in unsupervised protein representation learning. We argue that attention is a principled model of protein interactions, grounded in real properties of protein family data. We introduce a simplified attention layer, factored attention, and show that it achieves comparable performance to Potts models, while sharing parameters both within and across families. Further, we extract contacts from the attention maps of a pretrained Transformer and show they perform competitively with the other two approaches. This provides evidence that large-scale pretraining can learn meaningful protein features when presented with unlabeled and unaligned data. We contrast factored attention with the Transformer to indicate that the Transformer leverages hierarchical signal in protein family databases not captured by our single-layer models. This raises the exciting possibility for the development of powerful structured models of protein family databases.
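
For reference, the classical Potts-model contact extraction the abstract describes can be sketched as ranking position pairs by the APC-corrected Frobenius norm of their coupling matrices; fitting the couplings themselves (e.g. by pseudolikelihood) is omitted:

```python
# Sketch of the classical extraction step described above: given fitted Potts
# couplings J (an A x A matrix per position pair), rank pairs by the
# APC-corrected Frobenius norm. Fitting J and gauge/gap handling are omitted.
import numpy as np

def potts_contact_scores(J: np.ndarray) -> np.ndarray:
    """J: (L, L, A, A) coupling tensor -> (L, L) contact scores."""
    norms = np.linalg.norm(J, axis=(2, 3))         # Frobenius norm per position pair
    norms = 0.5 * (norms + norms.T)                # enforce symmetry
    np.fill_diagonal(norms, 0.0)
    row = norms.sum(axis=0, keepdims=True)
    col = norms.sum(axis=1, keepdims=True)
    return norms - (row * col) / norms.sum()       # average product correction

L, A = 80, 21                                      # 20 amino acids plus a gap state
scores = potts_contact_scores(np.random.randn(L, L, A, A))
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(f"highest-scoring pair: ({i}, {j})")
```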

Evaluating Protein Transfer Learning with TAPE

NeurIPS 2019. Spotlight.

Roshan Rao*, Nicholas Bhattacharya*, Neil Thomas*, Yan Duan, Xi Chen, John Canny, Pieter Abbeel, Yun S. Song

Protein modeling is an increasingly popular area of machine learning research. Semi-supervised learning has emerged as an important paradigm in protein modeling due to the high cost of acquiring supervised protein labels, but the current literature is fragmented when it comes to datasets and standardized evaluation techniques. To facilitate progress in this field, we introduce the Tasks Assessing Protein Embeddings (TAPE), a set of five biologically relevant semi-supervised learning tasks spread across different domains of protein biology. We curate tasks into specific training, validation, and test splits to ensure that each task tests biologically relevant generalization that transfers to real-life scenarios. We benchmark a range of approaches to semi-supervised protein representation learning, which span recent work as well as canonical sequence learning techniques. We find that self-supervised pretraining is helpful for almost all models on all tasks, more than doubling performance in some cases. Despite this increase, in several cases features learned by self-supervised pretraining still lag behind features extracted by state-of-the-art non-neural techniques. This gap in performance suggests a huge opportunity for innovative architecture design and improved modeling paradigms that better capture the signal in biological sequences. TAPE will help the machine learning community focus effort on scientifically relevant problems. Toward this end, all data and code used to run these experiments are available at https://github.com/songlab-cal/tape.
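
A minimal sketch of pulling pretrained features with the released code, assuming the tape package from the repository above; class names follow its README and may differ across versions:

```python
# Minimal sketch of extracting pretrained TAPE features for one sequence,
# assuming the tape package released with the repository above (class names
# follow its README and may differ across versions).
import torch
from tape import ProteinBertModel, TAPETokenizer

model = ProteinBertModel.from_pretrained("bert-base")
tokenizer = TAPETokenizer(vocab="iupac")   # amino-acid vocabulary for the BERT-style model

sequence = "GCTVEDRCLIGMGAILLNGCVIGSGSLVAAGALITQ"
token_ids = torch.tensor([tokenizer.encode(sequence)])
outputs = model(token_ids)
sequence_output = outputs[0]               # per-residue embeddings
pooled_output = outputs[1]                 # per-sequence embedding

# These embeddings feed the five downstream TAPE tasks (secondary structure,
# contact prediction, remote homology, fluorescence, stability).
print(sequence_output.shape, pooled_output.shape)
```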

GPU-Accelerated t-SNE and its Applications to Modern Data

High Performance Machine Learning (HPML) 2018. Outstanding Paper Award.

David Chan*, Roshan Rao*, Forrest Huang*, John Canny

This paper introduces t-SNE-CUDA, a GPU-accelerated implementation of t-distributed Stochastic Neighbor Embedding (t-SNE) for visualizing datasets and models. t-SNE-CUDA significantly outperforms current implementations with 50-700x speedups on the CIFAR-10 and MNIST datasets. These speedups enable, for the first time, visualization of the neural network activations on the entire ImageNet dataset, a feat that was previously computationally intractable. We also demonstrate visualization performance in the NLP domain by visualizing the GloVe embedding vectors. From these visualizations, we can draw interesting conclusions about using the L2 metric in these embedding spaces.
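
A minimal usage sketch, assuming the released tsnecuda package and its scikit-learn-style interface; hyperparameters here are illustrative rather than tuned:

```python
# Minimal usage sketch, assuming the released tsnecuda package and its
# scikit-learn style interface; hyperparameters are illustrative, not tuned.
import numpy as np
from tsnecuda import TSNE

features = np.random.rand(50000, 512).astype(np.float32)   # e.g. network activations
embedding = TSNE(n_components=2, perplexity=30, learning_rate=200).fit_transform(features)
print(embedding.shape)                                      # (50000, 2)
```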