From 192eeff40d7503b213195ae9a81c035324b9d550 Mon Sep 17 00:00:00 2001
From: lxy
Date: Tue, 5 Nov 2024 13:43:40 +0800
Subject: [PATCH] update

---
 README.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index aad5e2c..8f47014 100644
--- a/README.md
+++ b/README.md
@@ -197,7 +197,10 @@ You can get more information from our subsection. We introduce representative pa
 This repository offers access to over 20 high-performance large language models (LLMs) with comprehensive guides for pretraining, fine-tuning, and deploying at scale. It is designed to be beginner-friendly with from-scratch implementations and no complex abstractions.
 
-* Fine-tuning and In-Context learning for BIRD-SQL benchmark [Repository Link](https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/bird#fine-tuning-ft)
+* LLaMA-Factory [Repository Link](https://github.com/hiyouga/LLaMA-Factory)
+  Unified Efficient Fine-Tuning of 100+ LLMs. It integrates a wide range of models with scalable training resources, advanced algorithms, practical tricks, and comprehensive experiment-monitoring tools, and enables faster inference through optimized APIs and UIs.
+
+* Fine-tuning and In-Context learning for BIRD-SQL benchmark [Repository Link](https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/bird#fine-tuning-ft)
 A tutorial for both Fine-tuning and In-Context Learning is provided by the BIRD-SQL benchmark.