Zero-shot prediction of mutation effects on protein function with multimodal deep representation learning
Scripts to benchmark and train foldseek
Vector Quantized VAEs - PyTorch Implementation (a minimal quantizer sketch follows this list)
Official implementation of "Learning the language of protein structures"
AIDO.ModelGenerator is a software stack powering the development of an AI-driven Digital Organism (AIDO) by enabling researchers to adapt pretrained models and generate finetuned models for downstream tasks.
[NeurIPS 2024] BEACON: Benchmark for Comprehensive RNA Tasks and Language Models
Code for the ProteinMPNN paper
The official implementation of "MMSite: A Multi-modal Framework for the Identification of Active Sites in Proteins".
Code for ProSST: A Pre-trained Protein Sequence and Structure Transformer with Disentangled Attention.
Foldseek enables fast and sensitive comparisons of large structure sets (a usage sketch follows this list).
ProTrek: Navigating the Protein Universe through Tri-Modal Contrastive Learning
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
Saprot: Protein Language Model with Structural Alphabet (AA+3Di) (a loading sketch follows this list)
Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more.
The official code for "TaxDiff: Taxonomic-Guided Diffusion Model for Protein Sequence Generation"
A chatbot platform that supports multiple models from different providers.
FastGPT is a knowledge-based platform built on LLMs that offers a comprehensive suite of out-of-the-box capabilities such as data processing, RAG retrieval, and visual AI workflow orchestration, letting you easily develop and deploy complex question-answering systems.
A Text-guided Protein Design Framework, Nat Mach Intell 2025
A curated collection of open-source Chinese large language models, focusing on smaller-scale models that can be privately deployed at low training cost, covering base models, vertical-domain fine-tuning and applications, datasets, and tutorials.
[ICML-23 ORAL] ProtST: Multi-Modality Learning of Protein Sequences and Biomedical Texts
InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions
Code for ALBEF: a new vision-language pre-training method
The paper list accompanying the 86-page survey "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.
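
For the Vector Quantized VAEs entry above, a minimal sketch of the core quantization step, assuming PyTorch. The class name, codebook size, and the 0.25 commitment weight are illustrative defaults, not taken from that repo.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient."""

    def __init__(self, num_codes: int = 512, dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment term

    def forward(self, z_e: torch.Tensor):  # z_e: (batch, dim) encoder output
        # Squared L2 distance from each input to every code vector.
        d = (z_e.pow(2).sum(1, keepdim=True)
             - 2 * z_e @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(dim=1)            # nearest code index per input
        z_q = self.codebook(idx)         # quantized vectors
        # Codebook and commitment losses from the VQ-VAE paper.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # Straight-through estimator: copy gradients from z_q back to z_e.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx, loss

vq = VectorQuantizer()
z_q, codes, vq_loss = vq(torch.randn(8, 64))
```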
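For the Foldseek entry, a small wrapper around its easy-search subcommand, assuming the foldseek binary is on PATH and a target database was built beforehand with `foldseek createdb`; the file names below are placeholders.

```python
import subprocess
from pathlib import Path

def easy_search(query_pdb: str, target_db: str, out_m8: str, tmp_dir: str = "tmp") -> None:
    """One-shot structural search of a query structure against a prebuilt Foldseek database."""
    Path(tmp_dir).mkdir(exist_ok=True)
    # foldseek easy-search <query> <targetDB> <result.m8> <tmpDir>
    subprocess.run(
        ["foldseek", "easy-search", query_pdb, target_db, out_m8, tmp_dir],
        check=True,
    )

# e.g. easy_search("query.pdb", "pdb_db", "hits.m8")
```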
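For the Saprot entry, a loading sketch via Hugging Face transformers, following the pattern shown in the SaProt README; the checkpoint name and the example structure-aware sequence should be checked against that repo.

```python
import torch
from transformers import EsmTokenizer, EsmForMaskedLM

# Checkpoint name as listed in the SaProt README (verify before use).
model_path = "westlake-repl/SaProt_650M_AF2"
tokenizer = EsmTokenizer.from_pretrained(model_path)
model = EsmForMaskedLM.from_pretrained(model_path)

# SaProt interleaves each amino acid with a lowercase 3Di structure letter,
# using "#" where the structure token is unknown (sequence-only input).
seq = "M#E#V#Q#L#"
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # (1, tokens_incl_specials, vocab_size)
```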