Awesome-LLM

🔥 Large Language Models (LLMs) have taken the NLP community, the AI community, and the whole world by storm. Here is a curated list of papers about large language models, especially those relating to ChatGPT. It also contains frameworks for LLM training, tools for deploying LLMs, courses and tutorials on LLMs, and all publicly available LLM checkpoints and APIs.

Trending LLM Projects

  • LWM - Large World Model (LWM) is a general-purpose large-context multimodal autoregressive model.
  • Sora - Sora is an AI model that can create realistic and imaginative scenes from text instructions.
  • Gemma - Gemma is built for responsible AI development from the same research and technology used to create Gemini models.
  • minbpe - Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization (a sketch of the core merge loop follows this list).
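
To make the minbpe entry concrete, here is a minimal, self-contained sketch of the BPE training loop it implements. The function names and toy string are our own, and the real library adds regex splitting, special tokens, and tokenizer saving/loading.

```python
# Minimal sketch of Byte Pair Encoding (BPE) training, the algorithm
# behind minbpe and most LLM tokenizers. Illustrative only.
from collections import Counter

def get_pair_counts(ids):
    """Count occurrences of each adjacent token pair."""
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    ids = list(text.encode("utf-8"))  # start from raw bytes (0..255)
    merges = {}
    for step in range(num_merges):
        counts = get_pair_counts(ids)
        if not counts:
            break
        pair = counts.most_common(1)[0][0]  # most frequent adjacent pair
        new_id = 256 + step                 # new token id beyond byte range
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    return merges

print(train_bpe("aaabdaaabac", 3))
# {(97, 97): 256, (256, 97): 257, (257, 98): 258}
```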

Table of Contents

Milestone Papers

| Date | Keywords | Institute | Paper | Publication |
|---|---|---|---|---|
| 2017-06 | Transformers | Google | Attention Is All You Need | NeurIPS |
| 2018-06 | GPT 1.0 | OpenAI | Improving Language Understanding by Generative Pre-Training | |
| 2018-10 | BERT | Google | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | NAACL |
| 2019-02 | GPT 2.0 | OpenAI | Language Models are Unsupervised Multitask Learners | |
| 2019-09 | Megatron-LM | NVIDIA | Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | |
| 2019-10 | T5 | Google | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | JMLR |
| 2019-10 | ZeRO | Microsoft | ZeRO: Memory Optimizations Toward Training Trillion Parameter Models | SC |
| 2020-01 | Scaling Law | OpenAI | Scaling Laws for Neural Language Models | |
| 2020-05 | GPT 3.0 | OpenAI | Language Models are Few-Shot Learners | NeurIPS |
| 2021-01 | Switch Transformers | Google | Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity | JMLR |
| 2021-08 | Codex | OpenAI | Evaluating Large Language Models Trained on Code | |
| 2021-08 | Foundation Models | Stanford | On the Opportunities and Risks of Foundation Models | |
| 2021-09 | FLAN | Google | Finetuned Language Models are Zero-Shot Learners | ICLR |
| 2021-10 | T0 | HuggingFace et al. | Multitask Prompted Training Enables Zero-Shot Task Generalization | ICLR |
| 2021-12 | GLaM | Google | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | ICML |
| 2021-12 | WebGPT | OpenAI | WebGPT: Browser-assisted question-answering with human feedback | |
| 2021-12 | Retro | DeepMind | Improving language models by retrieving from trillions of tokens | ICML |
| 2021-12 | Gopher | DeepMind | Scaling Language Models: Methods, Analysis & Insights from Training Gopher | |
| 2022-01 | CoT | Google | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models | NeurIPS |
| 2022-01 | LaMDA | Google | LaMDA: Language Models for Dialog Applications | |
| 2022-01 | Megatron-Turing NLG | Microsoft & NVIDIA | Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model | |
| 2022-03 | InstructGPT | OpenAI | Training language models to follow instructions with human feedback | |
| 2022-04 | PaLM | Google | PaLM: Scaling Language Modeling with Pathways | |
| 2022-04 | Chinchilla | DeepMind | An empirical analysis of compute-optimal large language model training | NeurIPS |
| 2022-05 | OPT | Meta | OPT: Open Pre-trained Transformer Language Models | |
| 2022-05 | UL2 | Google | Unifying Language Learning Paradigms | ICLR |
| 2022-06 | Minerva | Google | Solving Quantitative Reasoning Problems with Language Models | NeurIPS |
| 2022-06 | Emergent Abilities | Google | Emergent Abilities of Large Language Models | TMLR |
| 2022-06 | BIG-bench | Google | Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models | |
| 2022-06 | METALM | Microsoft | Language Models are General-Purpose Interfaces | |
| 2022-09 | Sparrow | DeepMind | Improving alignment of dialogue agents via targeted human judgements | |
| 2022-10 | Flan-T5/PaLM | Google | Scaling Instruction-Finetuned Language Models | |
| 2022-10 | GLM-130B | Tsinghua | GLM-130B: An Open Bilingual Pre-trained Model | ICLR |
| 2022-11 | HELM | Stanford | Holistic Evaluation of Language Models | |
| 2022-11 | BLOOM | BigScience | BLOOM: A 176B-Parameter Open-Access Multilingual Language Model | |
| 2022-11 | Galactica | Meta | Galactica: A Large Language Model for Science | |
| 2022-12 | OPT-IML | Meta | OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization | |
| 2023-01 | Flan 2022 Collection | Google | The Flan Collection: Designing Data and Methods for Effective Instruction Tuning | ICML |
| 2023-02 | LLaMA | Meta | LLaMA: Open and Efficient Foundation Language Models | |
| 2023-02 | Kosmos-1 | Microsoft | Language Is Not All You Need: Aligning Perception with Language Models | |
| 2023-03 | PaLM-E | Google | PaLM-E: An Embodied Multimodal Language Model | ICML |
| 2023-03 | GPT 4 | OpenAI | GPT-4 Technical Report | |
| 2023-04 | Pythia | EleutherAI et al. | Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling | ICML |
| 2023-05 | Dromedary | CMU et al. | Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision | NeurIPS |
| 2023-05 | PaLM 2 | Google | PaLM 2 Technical Report | |
| 2023-05 | RWKV | Bo Peng | RWKV: Reinventing RNNs for the Transformer Era | EMNLP |
| 2023-05 | DPO | Stanford | Direct Preference Optimization: Your Language Model is Secretly a Reward Model | NeurIPS |
| 2023-05 | ToT | Google & Princeton | Tree of Thoughts: Deliberate Problem Solving with Large Language Models | NeurIPS |
| 2023-07 | LLaMA 2 | Meta | Llama 2: Open Foundation and Fine-Tuned Chat Models | |
| 2023-10 | Mistral 7B | Mistral | Mistral 7B | |
| 2023-12 | Mamba | CMU & Princeton | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | |
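
As a back-of-the-envelope companion to the scaling-law rows above (Kaplan et al., 2020; the Chinchilla analysis, 2022), the sketch below applies two widely quoted rules of thumb: training compute of roughly 6·N·D FLOPs for N parameters and D tokens, and a Chinchilla-style compute-optimal budget of roughly 20 tokens per parameter. The constants are approximations from those papers, not values taken from this list.

```python
# Back-of-the-envelope scaling arithmetic (rules of thumb, not exact):
#   * training compute C ~= 6 * N * D FLOPs (N params, D training tokens)
#   * Chinchilla-style compute-optimal training uses D ~= 20 * N tokens
def train_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

def chinchilla_optimal_tokens(n_params: float) -> float:
    return 20.0 * n_params

for n in (7e9, 70e9):  # e.g. 7B- and 70B-parameter models
    d = chinchilla_optimal_tokens(n)
    print(f"{n/1e9:.0f}B params -> ~{d/1e12:.2f}T tokens, "
          f"~{train_flops(n, d):.2e} FLOPs")
```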

Other Papers

If you're interested in the field of LLMs, you may find the above list of milestone papers helpful for exploring the field's history and state of the art. However, each research direction within LLMs offers its own insights and contributions, which are essential to understanding the field as a whole. For a detailed list of papers in the various subfields, please refer to the following link:

Open LLM

There are three important steps in building a ChatGPT-like LLM:

  • Pre-training
  • Instruction Tuning
  • Alignment (e.g. RLHF or DPO; a minimal DPO loss sketch follows this list)
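
To make the alignment step concrete, here is a minimal sketch of the DPO objective from the milestone table above. The log-probability tensors are toy stand-ins for per-sequence scores from a real trainable policy and frozen reference model, so treat this as an illustration rather than a training recipe.

```python
# Minimal sketch of the DPO loss (Rafailov et al., 2023), which aligns a
# policy to human preferences without an explicit reward model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """L = -log sigmoid(beta * ((pi_w - ref_w) - (pi_l - ref_l)))"""
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

# Toy check: the loss shrinks as the policy favors the chosen answer.
loss = dpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                torch.tensor([-11.0]), torch.tensor([-11.5]))
print(loss.item())  # ~0.62 here; 0.693 would mean "no preference yet"
```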

You may also find these leaderboards helpful:

  • Open LLM Leaderboard - aims to track, rank and evaluate LLMs and chatbots as they are released.
  • Chatbot Arena Leaderboard - a benchmark platform for large language models (LLMs) that features anonymous, randomized battles in a crowdsourced manner.
  • AlpacaEval Leaderboard - An automatic evaluator for instruction-following language models.
  • Open Ko-LLM Leaderboard - Objectively evaluates the performance of Korean large language models (LLMs).
  • Yet Another LLM Leaderboard - Leaderboard made with LLM AutoEval using Nous benchmark suite.
  • OpenCompass 2.0 LLM Leaderboard - OpenCompass is an LLM evaluation platform, supporting a wide range of models (InternLM2, GPT-4, LLaMA 2, Qwen, GLM, Claude, etc.) over 100+ datasets.

  • Gemma - Gemma is built for responsible AI development from the same research and technology used to create Gemini models.
  • Mistral - Mistral-7B-v0.1 is a small yet powerful model adaptable to many use cases, including code, with an 8k sequence length. Apache 2.0 license.
  • Mixtral 8x7B - a high-quality sparse mixture of experts model (SMoE) with open weights.
  • LLaMA & LLaMA-2 - A foundational large language model. LLaMA.cpp Lit-LLaMA
    • Alpaca - A model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. Alpaca.cpp Alpaca-LoRA
    • Flan-Alpaca - Instruction Tuning from Humans and Machines.
    • Baize - Baize is an open-source chat model trained with LoRA. It uses 100k dialogs generated by letting ChatGPT chat with itself.
    • Cabrita - A Portuguese instruction-finetuned LLaMA.
    • Vicuna - An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality.
    • Llama-X - Open Academic Research on Improving LLaMA to SOTA LLM.
    • Chinese-Vicuna - A Chinese Instruction-following LLaMA-based Model.
    • GPTQ-for-LLaMA - 4-bit quantization of LLaMA using GPTQ.
    • GPT4All - Demo, data, and code to train open-source assistant-style large language model based on GPT-J and LLaMa.
    • Koala - A Dialogue Model for Academic Research
    • BELLE - Be Everyone's Large Language model Engine
    • StackLLaMA - A hands-on guide to train LLaMA with RLHF.
    • RedPajama - An open-source recipe to reproduce the LLaMA training dataset.
    • Chimera - Latin Phoenix.
    • WizardLM|WizardCoder - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder.
    • CaMA - a Chinese-English Bilingual LLaMA Model.
    • Orca - Microsoft's finetuned LLaMA model that reportedly matches GPT-3.5, trained on ~5M instruction examples with responses from ChatGPT and GPT-4.
    • BayLing - an English/Chinese LLM equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following and multi-turn interaction.
    • UltraLM - Large-scale, Informative, and Diverse Multi-round Chat Models.
    • Guanaco - QLoRA-tuned LLaMA.
    • ChiMed-GPT - A Chinese medical large language model.
    • RAFT - RAFT: A new way to teach LLMs to be better at RAG (paper).
    • Gorilla LLM - Gorilla: Large Language Model Connected with Massive APIs
    • LLaVa - LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.
  • BLOOM - BigScience Large Open-science Open-access Multilingual Language Model BLOOM-LoRA
    • BLOOMZ&mT0 - a family of models capable of following human instructions in dozens of languages zero-shot.
    • Phoenix
  • DeepSeek
    • Coder - Let the Code Write Itself.
    • LLM - Let there be answers.
    • An AI assistant built on large language models developed in-house by DeepSeek (深度求索), an AI company under the well-known quant-fund giant High-Flyer (幻方量化). Available sizes include 7B-base, 67B-base, MoE-16B-base, and more. | Chat with DeepSeek (Beta)
  • Yi - A series of large language models trained from scratch by developers @01-ai.
  • T5 - Text-to-Text Transfer Transformer
    • T0 - Multitask Prompted Training Enables Zero-Shot Task Generalization
  • OPT - Open Pre-trained Transformer Language Models.
  • UL2 - a unified framework for pretraining models that are universally effective across datasets and setups.
  • GLM - GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks.
  • RWKV - Parallelizable RNN with Transformer-level LLM Performance.
    • ChatRWKV - ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model.
    • Trending Demo - RWKV-5 trained on 100+ world languages (70% English, 15% multilang, 15% code).
  • StableLM - Stability AI Language Models.
  • YaLM - a GPT-like neural network for generating and processing text. It can be used freely by developers and researchers from all over the world.
  • GPT-Neo - An implementation of model & data parallel GPT3-like models using the mesh-tensorflow library.
  • GPT-J - A 6 billion parameter, autoregressive text generation model trained on The Pile.
    • Dolly - a cheap-to-build LLM that exhibits a surprising degree of the instruction-following capability seen in ChatGPT.
  • Pythia - Interpreting Autoregressive Transformers Across Time and Scale
    • Dolly 2.0 - the first open source, instruction-following LLM, fine-tuned on a human-generated instruction dataset licensed for research and commercial use.
  • OpenFlamingo - an open-source reproduction of DeepMind's Flamingo model.
  • Cerebras-GPT - A Family of Open, Compute-efficient, Large Language Models.
  • GALACTICA - The GALACTICA models are trained on a large-scale scientific corpus.
    • GALPACA - GALACTICA 30B fine-tuned on the Alpaca dataset.
  • Palmyra - Palmyra Base was primarily pre-trained with English text.
  • Camel - a state-of-the-art instruction-following large language model designed to deliver exceptional performance and versatility.
  • h2oGPT
  • PanGu-α - PanGu-α is a 200B parameter autoregressive pretrained Chinese language model developed by Huawei Noah's Ark Lab, MindSpore Team and Peng Cheng Laboratory.
  • MOSS - An open-source conversational language model supporting Chinese-English bilingual dialogue and a variety of plugins.
  • Open-Assistant - a project meant to give everyone access to a great chat based large language model.
    • HuggingChat - Powered by Open Assistant's latest model – the best open source chat model right now and @huggingface Inference API.
  • StarCoder - Hugging Face LLM for Code
  • MPT-7B - Open LLM for commercial use by MosaicML
  • Falcon - Falcon LLM is a foundational large language model (LLM) with 40 billion parameters, trained on one trillion tokens and released by TII.
  • XGen - Salesforce open-source LLMs with 8k sequence length.
  • Baichuan - A series of large language models developed by Baichuan Intelligent Technology.
  • Aquila - Wudao·Aquila is the first open-source large language model that combines Chinese-English bilingual knowledge with support for commercial licensing and compliance with Chinese domestic data regulations.
  • phi-1 - a new large language model for code, with significantly smaller size than competing models.
  • phi-1.5 - a 1.3 billion parameter model trained on a dataset of 30 billion tokens, which achieves common sense reasoning benchmark results comparable to models ten times its size that were trained on datasets more than ten times larger.
  • phi-2 - a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters.
  • InternLM / 书生·浦语 - Official release of InternLM2 7B and 20B base and chat models. 200K context support. Homepage | ModelScope
  • BlueLM-7B - BlueLM (蓝心大模型): Open large language models developed by vivo AI Lab. Homepage | ModelScope
  • Qwen series - The large language model series developed by Alibaba Cloud, including 7B and 72B models as well as various quantized and Chat versions. Chat Demo
  • XVERSE series - Multilingual large language models developed by Shenzhen-based XVERSE Technology Inc., available in 7B, 13B, 65B, and other sizes.
  • Skywork series - A series of large models developed by the Kunlun Group's Skywork team.

LLM Training Frameworks

  • DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (see the sketch after this list).
  • Megatron-DeepSpeed - DeepSpeed version of NVIDIA's Megatron-LM that adds additional support for several features such as MoE model training, Curriculum Learning, 3D Parallelism, and others.
  • FairScale - FairScale is a PyTorch extension library for high performance and large scale training.
  • Megatron-LM - Ongoing research training transformer models at scale.
  • Colossal-AI - Making large AI models cheaper, faster, and more accessible.
  • BMTrain - Efficient Training for Big Models.
  • Mesh Tensorflow - Mesh TensorFlow: Model Parallelism Made Easier.
  • maxtext - A simple, performant and scalable Jax LLM!
  • Alpa - Alpa is a system for training and serving large-scale neural networks.
  • GPT-NeoX - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
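
As a flavor of how these frameworks are wired in, here is a minimal, hedged sketch of DeepSpeed's training entry point. The model and config values are illustrative placeholders, not recommendations.

```python
# Minimal sketch of wrapping a PyTorch model with DeepSpeed (values are
# illustrative, not tuned). Normally launched via the `deepspeed` CLI so
# that the distributed environment is initialized for you.
import torch
import deepspeed

model = torch.nn.Linear(1024, 1024)  # stand-in for a real transformer

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},          # mixed-precision training
    "zero_optimization": {"stage": 2},  # ZeRO stage-2 state partitioning
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
}

# deepspeed.initialize wraps the model in an engine that manages ZeRO
# sharding, mixed precision, and data parallelism.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 1024, device=engine.device, dtype=torch.half)
loss = engine(x).float().pow(2).mean()  # toy loss
engine.backward(loss)                   # DeepSpeed-managed backward pass
engine.step()                           # optimizer step + ZeRO bookkeeping
```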

LLM Evaluation Frameworks

  • lm-evaluation-harness - A framework for few-shot evaluation of language models (see the snippet after this list).
  • lighteval - a lightweight LLM evaluation suite that Hugging Face has been using internally.
  • OLMO-eval - a repository for evaluating open language models.
  • instruct-eval - This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks.
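
To show roughly how these harnesses are invoked, here is a sketch of lm-evaluation-harness's Python entry point. Exact argument names vary across versions, so treat this as an assumption-laden example rather than canonical usage.

```python
# Sketch of driving lm-evaluation-harness from Python. `simple_evaluate`
# exists in recent (v0.4+) releases, but argument names have shifted
# between versions -- check the docs for the version you install.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                    # HuggingFace-backed model loader
    model_args="pretrained=gpt2",  # any HF causal-LM checkpoint
    tasks=["hellaswag"],           # one of the many built-in tasks
    num_fewshot=0,
    batch_size=8,
)
print(results["results"]["hellaswag"])  # accuracy and related metrics
```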

Deploying Tools

  • Langfuse - Open Source LLM Engineering Platform 🪢 Tracing, Evaluations, Prompt Management, and Playground.
  • FastChat - A distributed multi-model LLM serving system with web UI and OpenAI-compatible RESTful APIs.
  • MindSQL - A Python package for text-to-SQL with self-hosting functionality and RESTful APIs compatible with proprietary as well as open-source LLMs.
  • SkyPilot - Run LLMs and batch jobs on any cloud. Get maximum cost savings, highest GPU availability, and managed execution -- all with a simple interface.
  • vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs (see the sketch after this list).
  • Text Generation Inference - A Rust, Python and gRPC server for text generation inference. Used in production at Hugging Face to power the LLM api-inference widgets. HFOIL license.
  • Haystack - an open-source NLP framework that allows you to use LLMs and transformer-based models from Hugging Face, OpenAI and Cohere to interact with your own data.
  • Sidekick - Data integration platform for LLMs.
  • LangChain - Building applications with LLMs through composability
  • Floom - AI gateway and marketplace for developers that enables streamlined integration of AI features into products.
  • Swiss Army Llama - Comprehensive set of tools for working with local LLMs for various tasks.
  • LiteChain - Lightweight alternative to LangChain for composing LLMs
  • magentic - Seamlessly integrate LLMs as Python functions
  • wechat-chatgpt - Use ChatGPT on WeChat via wechaty.
  • promptfoo - Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality.
  • Agenta - Easily build, version, evaluate and deploy your LLM-powered apps.
  • Serge - a chat interface crafted with llama.cpp for running Alpaca models. No API keys, entirely self-hosted!
  • Langroid - Harness LLMs with Multi-Agent Programming
  • Embedchain - Framework to create ChatGPT-like bots over your dataset.
  • CometLLM - A 100% open-source LLMOps platform to log, manage, and visualize your LLM prompts and chains. Track prompt templates, prompt variables, prompt duration, token usage, and other metadata. Score prompt outputs and visualize chat history all within a single UI.
  • IntelliServer - simplifies the evaluation of LLMs by providing a unified microservice to access and test multiple AI models.
  • OpenLLM - Fine-tune, serve, deploy, and monitor any open-source LLMs in production. Used in production at BentoML for LLM-based applications.
  • DeepSpeed-MII - MII makes low-latency and high-throughput inference possible, powered by DeepSpeed, similar to vLLM.
  • Text-Embeddings-Inference - Inference for text embeddings in Rust. HFOIL license.
  • Infinity - Inference for text embeddings in Python.
  • TensorRT-LLM - NVIDIA framework for LLM inference.
  • FasterTransformer - NVIDIA framework for LLM inference (transitioned to TensorRT-LLM).
  • Flash-Attention - A method designed to enhance the efficiency of Transformer models.
  • Langchain-Chatchat - Formerly langchain-ChatGLM; a local-knowledge-base QA app built with LangChain and LLMs such as ChatGLM.
  • Search with Lepton - Build your own conversational search engine using less than 500 lines of code by LeptonAI.
  • Robocorp - Create, deploy and operate Actions using Python anywhere to enhance your AI agents and assistants. Batteries included with an extensive set of libraries, helpers and logging.
  • LMDeploy - A high-throughput and low-latency inference and serving framework for LLMs and VLMs (vision-language models).
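
For a taste of the serving side, below is a minimal sketch of offline batched generation with vLLM, one of the engines listed above. The checkpoint name and prompts are only examples.

```python
# Minimal sketch of offline batched inference with vLLM. The checkpoint
# name is an example; most HF-compatible causal LMs work.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-v0.1")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = [
    "The main bottleneck in LLM serving is",
    "Large language models are",
]
outputs = llm.generate(prompts, params)  # continuous batching under the hood
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```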

Prompting libraries & tools

  • YiVal — Evaluate and Evolve: YiVal is an open-source GenAI-Ops tool for tuning and evaluating prompts, configurations, and model parameters using customizable datasets, evaluation methods, and improvement strategies.
  • Guidance — A handy Python library from Microsoft that uses Handlebars templating to interleave generation, prompting, and logical control.
  • LangChain — A popular Python/JavaScript library for chaining sequences of language model prompts.
  • FLAML (A Fast Library for Automated Machine Learning & Tuning): A Python library for automating selection of models, hyperparameters, and other tunable choices.
  • Chainlit — A Python library for making chatbot interfaces.
  • Guardrails.ai — A Python library for validating outputs and retrying failures (the sketch after this list shows the underlying pattern). Still in alpha, so expect sharp edges and bugs.
  • Semantic Kernel — A Python/C#/Java library from Microsoft that supports prompt templating, function chaining, vectorized memory, and intelligent planning.
  • Prompttools — Open-source Python tools for testing and evaluating models, vector DBs, and prompts.
  • Outlines — A Python library that provides a domain-specific language to simplify prompting and constrain generation.
  • Promptify — A small Python library for using language models to perform NLP tasks.
  • Scale Spellbook — A paid product for building, comparing, and shipping language model apps.
  • PromptPerfect — A paid product for testing and improving prompts.
  • Weights & Biases — A paid product for tracking model training and prompt engineering experiments.
  • OpenAI Evals — An open-source library for evaluating task performance of language models and prompts.
  • LlamaIndex — A Python library for augmenting LLM apps with data.
  • Arthur Shield — A paid product for detecting toxicity, hallucination, prompt injection, etc.
  • LMQL — A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools.
  • ModelFusion - A TypeScript library for building apps with LLMs and other ML models (speech-to-text, text-to-speech, image generation).
  • Flappy — Production-Ready LLM Agent SDK for Every Developer.
  • GPTRouter - GPTRouter is an open source LLM API Gateway that offers a universal API for 30+ LLMs, vision, and image models, with smart fallbacks based on uptime and latency, automatic retries, and streaming, so you stay operational even when OpenAI is down.
  • QAnything - A local knowledge base question-answering system designed to support a wide range of file formats and databases.
    • Core modules: BCEmbedding - Bilingual and Crosslingual Embedding for RAG
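
Many of the libraries above automate variants of the same core loop: fill a prompt template, call a model, validate the output, and retry on failure. Below is a dependency-free sketch of that pattern; `call_model` is a hypothetical placeholder for whatever LLM client you use, not an API from any of these tools.

```python
# Dependency-free sketch of the loop that many prompting libraries
# (e.g. Guardrails, LangChain, Outlines) automate: template -> generate
# -> validate -> retry. `call_model` is a hypothetical placeholder.
import json

TEMPLATE = 'Extract the city mentioned below as JSON {{"city": "..."}}:\n{text}'

def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def extract_city(text: str, max_retries: int = 3) -> dict:
    prompt = TEMPLATE.format(text=text)
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)                 # structural validation
            if isinstance(data.get("city"), str):  # schema validation
                return data
        except json.JSONDecodeError:
            pass
        # Ask the model to repair its own output, then try again.
        prompt = "Return ONLY valid JSON of the form {\"city\": \"...\"}.\nPrevious attempt:\n" + raw
    raise ValueError("no valid output after retries")
```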

Tutorials

  • [Maarten Grootendorst] A Visual Guide to Mamba and State Space Models blog
  • [Jack Cook] Mamba: The Easy Way
  • [Andrej Karpathy] minbpe video
  • [Andrej Karpathy] State of GPT video
  • [Hyung Won Chung] Instruction finetuning and RLHF lecture Youtube
  • [Jason Wei] Scaling, emergence, and reasoning in large language models Slides
  • [Susan Zhang] Open Pretrained Transformers Youtube
  • [Ameet Deshpande] How Does ChatGPT Work? Slides
  • [Yao Fu] Pre-training, Instruction Tuning, Alignment, Specialization: On the Source of Large Language Models' Abilities Bilibili
  • [Hung-yi Lee] An Analysis of How ChatGPT Works Youtube
  • [Jay Mody] GPT in 60 Lines of NumPy Link
  • [ICML 2022] Welcome to the "Big Model" Era: Techniques and Systems to Train and Serve Bigger Models Link
  • [NeurIPS 2022] Foundational Robustness of Foundation Models Link
  • [Andrej Karpathy] Let's build GPT: from scratch, in code, spelled out. Video|Code
  • [DAIR.AI] Prompt Engineering Guide Link
  • [邱锡鹏] Capability Analysis and Applications of Large Language Models Slides | Video
  • [Philipp Schmid] Fine-tune FLAN-T5 XL/XXL using DeepSpeed & Hugging Face Transformers Link
  • [HuggingFace] Illustrating Reinforcement Learning from Human Feedback (RLHF) Link
  • [HuggingFace] What Makes a Dialog Agent Useful? Link
  • [张俊林] The Road to AGI: Technical Essentials of Large Language Models (LLMs) Link
  • [大师兄] ChatGPT/InstructGPT Explained in Detail Link
  • [HeptaAI] The Core of ChatGPT: InstructGPT and PPO Reinforcement Learning from Instruction Feedback Link
  • [Yao Fu] How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources Link
  • [Stephen Wolfram] What Is ChatGPT Doing … and Why Does It Work? Link
  • [Jingfeng Yang] Why did all of the public reproduction of GPT-3 fail? Link
  • [Hung-yi Lee] How ChatGPT Was (Probably) Made: the Socialization Process of GPT Video
  • [Keyvan Kambakhsh] Pure Rust implementation of a minimal Generative Pretrained Transformer code
  • [过拟合] A Zhihu column on training large language models Link
  • [StatQuest] Sequence-to-Sequence (seq2seq) Encoder-Decoder Neural Networks Link
  • [StatQuest] Transformer Neural Networks, ChatGPT's foundation Link
  • [StatQuest] Decoder-Only Transformers, ChatGPT's specific Transformer Link
  • [康斯坦丁] Understanding Language Processing Through Embedding Link

Courses

  • [UWaterloo] CS 886: Recent Advances on Foundation Models Homepage
  • [DeepLearning.AI] ChatGPT Prompt Engineering for Developers Homepage
  • [Princeton] Understanding Large Language Models Homepage
  • [OpenBMB] Open Course on Large Models Homepage
  • [Stanford] CS224N-Lecture 11: Prompting, Instruction Finetuning, and RLHF Slides
  • [Stanford] CS324-Large Language Models Homepage
  • [Stanford] CS25-Transformers United V2 Homepage
  • [Stanford Webinar] GPT-3 & Beyond Video
  • [李沐] In-depth Reading of the InstructGPT Paper Bilibili Youtube
  • [陳縕儂] OpenAI InstructGPT: Learning from Human Feedback, the Predecessor of ChatGPT Youtube
  • [李沐] HELM: Comprehensive Evaluation of Language Models Bilibili
  • [李沐] In-depth Reading of the GPT, GPT-2, and GPT-3 Papers Bilibili Youtube
  • [Aston Zhang] The Chain-of-Thought Paper Bilibili Youtube
  • [MIT] Introduction to Data-Centric AI Homepage
  • [DeepLearning.AI] Building Applications with Vector Databases Homepage
  • [DeepLearning.AI] Building Systems with the ChatGPT API Homepage
  • [DeepLearning.AI] LangChain for LLM Application Development Homepage
  • [DeepLearning.AI] LangChain: Chat with Your Data Homepage
  • [DeepLearning.AI] Finetuning Large Language Models Homepage
  • [DeepLearning.AI] Build LLM Apps with LangChain.js Homepage
  • [DeepLearning.AI] Large Language Models with Semantic Search Homepage
  • [DeepLearning.AI] LLMOps Homepage
  • [DeepLearning.AI] Building and Evaluating Advanced RAG Applications Homepage
  • [DeepLearning.AI] Quality and Safety for LLM Applications Homepage
  • [DeepLearning.AI] Vector Databases: from Embeddings to Applications Homepage
  • [DeepLearning.AI] Functions, Tools and Agents with LangChain Homepage
  • [Arize] LLM Observability: Evaluations Homepage
  • [Arize] LLM Observability: Traces and Spans Homepage

Books

Opinions

Other Useful Resources

  • Arize-Phoenix - Open-source tool for ML observability that runs in your notebook environment. Monitor and fine-tune LLM, CV and tabular models.
  • Emergent Mind - The latest AI news, curated & explained by GPT-4.
  • ShareGPT - Share your wildest ChatGPT conversations with one click.
  • Major LLMs + Data Availability
  • 500+ Best AI Tools
  • Cohere Summarize Beta - Introducing Cohere Summarize Beta: A New Endpoint for Text Summarization
  • chatgpt-wrapper - ChatGPT Wrapper is an open-source unofficial Python API and CLI that lets you interact with ChatGPT.
  • Open-evals - A framework that extends OpenAI's Evals to different language models.
  • Cursor - Write, edit, and chat about your code with a powerful AI.
  • AutoGPT - an experimental open-source application showcasing the capabilities of the GPT-4 language model.
  • OpenAGI - When LLM Meets Domain Experts.
  • HuggingGPT - Solving AI Tasks with ChatGPT and its Friends in HuggingFace.
  • EasyEdit - An easy-to-use framework to edit large language models.
  • chatgpt-shroud - A Chrome extension for OpenAI's ChatGPT, enhancing user privacy by enabling easy hiding and unhiding of chat history. Ideal for privacy during screen shares.
  • MTEB - Massive Text Embedding Benchmark Leaderboard
  • xFormers - A PyTorch-based library which hosts flexible Transformer parts.

Contributing

This is an active repository and your contributions are always welcome!

I will keep some pull requests open if I'm not sure whether they are awesome enough for this list; you can vote for them by adding 👍 to them.


If you have any questions about this opinionated list, do not hesitate to contact me at [email protected].
