[ICLR 2023] "HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing" by Tianlong Chen*, Chengyue Gong*, Daniel Jesus Diaz, Xuxi Chen, Jordan Tyler Wells, Qiang Liu, Zhangyang Wang, Andrew Ellington, Alex Dimakis, Adam Klivans


HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing

License: MIT

The official implementation of ICLR 2023 paper HotProtein: A Novel Framework for Protein Thermostability Prediction and Editing.

Abstract

The molecular basis of protein thermal stability is only partially understood and has major significance for drug and vaccine discovery. The lack of datasets and standardized benchmarks considerably limits learning-based discovery methods. We present HotProtein, a large-scale protein dataset with growth temperature annotations of thermostability, containing 182K amino acid sequences and 3K folded structures from 230 different species with a wide temperature range $-20^{\circ}\texttt{C}\sim 120^{\circ}\texttt{C}$. Due to functional domain differences and data scarcity within each species, existing methods fail to generalize well on our dataset. We address this problem through a novel learning framework, consisting of (1) Protein structure-aware pre-training (SAP) which leverages 3D information to enhance sequence-based pre-training; (2) Factorized sparse tuning (FST) that utilizes low-rank and sparse priors as an implicit regularization, together with feature augmentations. Extensive empirical studies demonstrate that our framework improves thermostability prediction compared to other deep learning models. Finally, we introduce a novel editing algorithm to efficiently generate positive amino acid mutations that improve thermostability.
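To make the factorized sparse tuning (FST) idea concrete, here is a minimal NumPy sketch of parameterizing a weight update as a low-rank term plus a sparse term. The rank `r`, sparsity budget `k`, and function name are illustrative assumptions, not the paper's actual hyper-parameters or API:

```python
import numpy as np

def fst_delta(d_out, d_in, r=4, k=16, seed=0):
    """Sketch of an FST-style update: low-rank factors U @ V plus a
    sparse matrix S with only k non-zero (trainable) entries.
    Hypothetical helper, not the repository's implementation."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((d_out, r)) * 0.01   # low-rank factor
    V = rng.standard_normal((r, d_in)) * 0.01    # low-rank factor
    S = np.zeros((d_out, d_in))
    idx = rng.choice(d_out * d_in, size=k, replace=False)
    S.flat[idx] = rng.standard_normal(k) * 0.01  # sparse component
    return U @ V + S

W_pre = np.zeros((8, 8))            # stand-in for a frozen pre-trained weight
W_tuned = W_pre + fst_delta(8, 8)   # tuned weight = frozen weight + structured delta
```

The appeal of this parameterization is the parameter count: `r * (d_out + d_in) + k` trainable values instead of a dense `d_out * d_in` update, which acts as an implicit low-rank-plus-sparse regularizer.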

Usage

Environment

pip install -e .
pip install wandb
pip install torch

Datasets

HP-S2C2: Google Drive

HP-S2C5: Google Drive

HP-S: Google Drive

Checkpoints

The checkpoints produced by protein structure-aware pre-training (SAP) are available at this link.

Training

We provide sample training scripts in the scripts folder.

# train esm1b_t33_650M_UR50D with HP-S2C2 using the sap.pt model
bash scripts/s2c2_classification.sh

# train esm2_t33_650M_UR50D with the HP-S2C2 dataset
bash scripts/s2c2_classification.sh esm2_t33_650M_UR50D

# train esm2_t6_8M_UR50D with the HP-S2C2 dataset
bash scripts/s2c2_classification.sh esm2_t6_8M_UR50D

# train esm2_t36_3B_UR50D with the HP-S dataset; requires more than 14 GB of GPU memory
bash scripts/s_classification.sh esm2_t36_3B_UR50D

# train esm2_t48_15B_UR50D with the HP-S dataset; requires more than 48 GB of GPU memory
bash scripts/s_classification.sh esm2_t48_15B_UR50D
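The abstract also mentions an editing algorithm that proposes thermostability-improving mutations. As a rough, hedged illustration of that idea (not the paper's algorithm), a brute-force single-point scan against a stand-in scoring function could look like this; `score_fn` is a placeholder for a trained predictor:

```python
AAS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def propose_mutations(seq, score_fn, top=5):
    """Enumerate all single-point mutants of seq and keep those whose
    predicted score improves over the wild type. Illustrative sketch;
    score_fn is a hypothetical predictor, not part of this repository."""
    base = score_fn(seq)
    gains = []
    for i, wt in enumerate(seq):
        for aa in AAS:
            if aa == wt:
                continue
            mut = seq[:i] + aa + seq[i + 1:]
            delta = score_fn(mut) - base
            if delta > 0:  # keep only positive (stabilizing) mutations
                gains.append((delta, f"{wt}{i + 1}{aa}", mut))
    gains.sort(reverse=True)
    return gains[:top]
```

For example, with a toy scoring function such as `lambda s: s.count("K")`, the scan returns mutations labeled in the usual wild-type/position/mutant notation (e.g. `A1K`), ranked by predicted gain.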

Acknowledgement

Our code is developed on top of esm.
