Code for our paper "RaSeRec: Retrieval-Augmented Sequential Recommendation"
[ICLR'24 spotlight] An open platform for training, serving, and evaluating large language models for tool learning.
Official implementation of "Bridging Local Details and Global Context in Text-Attributed Graphs" (EMNLP 2024 Main)
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving Product Search
An introductory tutorial on recommender systems; read online at https://datawhalechina.github.io/fun-rec/
Generative Representational Instruction Tuning
Official repository of the MIRAGE benchmark
A summary of the knowledge an NLP engineer needs to accumulate, including interview questions, fundamentals, and engineering skills, to build core competitiveness.
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
Curated list of project-based tutorials
Azure OpenAI Service proxy. Converts official OpenAI API requests to Azure OpenAI API requests; supports GPT-4, Embeddings, and LangChain. An adapter from OpenAI to Azure OpenAI.
This repository contains code for the EMNLP 2023 paper "Is Too Much Context Detrimental for Open-Domain Question Answering?"
The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity"
Chinese LLaMA-2 & Alpaca-2 LLMs (Phase 2 project), plus 64K long-context models.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using Llama mode…
A repository for pretraining from scratch and SFT-tuning a small-parameter Chinese LLaMA-2; a single 24 GB GPU is enough to produce a chat-llama2 with basic Chinese Q&A ability.
This repository collects and organizes classic algorithm models in the recommender systems field.
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
Collects, organizes, and publishes Chinese NLP corpora/datasets, working with like-minded contributors to advance Chinese natural language processing.