Stars
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
[EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings https://arxiv.org/abs/2104.08821
GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023)
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
ChatGLM-6B: An Open Bilingual Dialogue Language Model
⚡ Dynamically generated, customizable SVG that gives the appearance of typing and deleting text for use on your profile page, repositories, or website.
Instruct-tune LLaMA on consumer hardware
⭐️ NLP algorithms built on the transformers library, supporting text classification, text generation, information extraction, text matching, RLHF, SFT, etc.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Chinese-Vicuna: A Chinese instruction-following LLaMA-based model; a low-resource Chinese LLaMA + LoRA approach with a structure modeled on Alpaca
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese dialogue large language model)
A Japanese instruction-finetuned LLaMA
骆驼 (Luotuo): Open-Sourced Chinese Language Models. Developed by 陈启源 @ Central China Normal University, 李鲁鲁 @ SenseTime, and 冷子昂 @ SenseTime
Aligning pretrained language models with instruction data generated by themselves.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Chinese text classification with TextCNN, TextRNN, FastText, TextRCNN, BiLSTM_Attention, DPCNN, and Transformer; based on PyTorch, ready to use out of the box.
1st place solution for the Zhihu Machine Learning Challenge (知乎看山杯); implementations of various text-classification models.
Summary and comparison of Chinese text-classification models