Stars
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
Measuring Massive Multitask Language Understanding | ICLR 2021
Summarizes existing representative LLM text datasets.
High-quality datasets, tools, and concepts for LLM fine-tuning.
A curated list of awesome instruction tuning datasets, models, papers and repositories.
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
MINT-1T: A one trillion token multimodal interleaved dataset.
huangzhengxiang/mnn-llm
Forked from wangzhaode/mnn-llm. An LLM deployment project based on MNN.
Awesome speech/audio LLMs, representation learning, and codec models
Natural language processing study notes: principles and examples of machine learning and deep learning based on the TensorFlow and PyTorch frameworks; detailed explanations, with source code, of the latest pre-trained models such as Transformer, BERT, and ALBERT; and various NLP tasks built on pre-trained models. Model deployment.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Instruct-tune LLaMA on consumer hardware
Chinese NLP solutions (large models, data, models, training, inference).