Stars
Comprehensive tools and frameworks for developing foundation models tailored to recommendation systems.
A simple, easy-to-hack GraphRAG implementation
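To make the GraphRAG idea concrete: chunks are linked through the entities they mention, and retrieval expands from directly matching chunks to their graph neighbors. A minimal, dependency-free sketch of that pattern (all names hypothetical; this is not the repo's API):

```python
# Minimal GraphRAG-style sketch: chunks connected via shared entities,
# retrieval expands from keyword hits to graph neighbors. Illustrative only.
from collections import defaultdict

def build_graph(chunks, extract_entities):
    """Map each entity to the ids of the chunks that mention it."""
    entity_to_chunks = defaultdict(set)
    for cid, text in enumerate(chunks):
        for ent in extract_entities(text):
            entity_to_chunks[ent].add(cid)
    return entity_to_chunks

def retrieve(query, chunks, entity_to_chunks, extract_entities, hops=1):
    """Start from chunks sharing an entity with the query, then expand."""
    hit = set()
    for ent in extract_entities(query):
        hit |= entity_to_chunks.get(ent, set())
    for _ in range(hops):  # one hop: chunks co-mentioning an entity with a hit
        frontier = {cid for cids in entity_to_chunks.values()
                    if cids & hit for cid in cids}
        hit |= frontier
    return [chunks[cid] for cid in sorted(hit)]

# Toy "entity extractor": capitalized words stand in for a real NER step.
extract = lambda text: {w.strip(".,?!") for w in text.split() if w[:1].isupper()}
docs = ["Alice founded Acme.", "Acme acquired Beta.", "Beta builds rockets."]
graph = build_graph(docs, extract)
print(retrieve("Who is Alice?", docs, graph, extract))
```

The one-hop expansion is what distinguishes this from flat retrieval: "Acme acquired Beta." is returned for a query about Alice because it shares an entity with a directly matching chunk.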
OpenXAI: Towards a Transparent Evaluation of Model Explanations
Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
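The 1.2M-parameter figure comes from tuning only small inserted modules while the base model stays frozen; LLaMA-Adapter's variant uses learnable prompts behind a zero-initialized gate so training starts as an identity map. A heavily simplified sketch of that gating idea (not the repo's code; module and names are made up for illustration):

```python
# Zero-gated learnable prompt, loosely in the spirit of LLaMA-Adapter.
# Illustrative sketch, not the repo's implementation.
import torch
import torch.nn as nn

class ZeroGatedPrompt(nn.Module):
    """Learnable prompt whose contribution starts at exactly zero."""
    def __init__(self, n_prompt, dim):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: no effect at step 0

    def forward(self, hidden):                    # hidden: (B, T, dim)
        attn = hidden @ self.prompt.t()           # (B, T, n_prompt) scores
        mix = attn.softmax(dim=-1) @ self.prompt  # attend over prompt tokens
        return hidden + torch.tanh(self.gate) * mix

x = torch.randn(2, 10, 32)
layer = ZeroGatedPrompt(n_prompt=4, dim=32)
print(torch.allclose(layer(x), x))  # True at init: tuning starts stable
```

Because only `prompt` and `gate` are trainable, the tunable parameter count stays tiny relative to the frozen base model.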
A big-data project analyzing forum questions on Stack Exchange; it predicts accepted answers using the Stack Exchange Data Dump from archive.org.
Python tools for processing the Stack Exchange data dumps into a text dataset for language models
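The dumps themselves are per-site XML files (e.g. Posts.xml) whose row elements carry HTML post bodies as attributes. A rough sketch of streaming one into plain text, independent of this repo's actual pipeline:

```python
# Sketch: turn a Stack Exchange Posts.xml dump into plain text.
# Illustrative only; the repo's real pipeline will differ.
import re
import html
import xml.etree.ElementTree as ET

TAG_RE = re.compile(r"<[^>]+>")  # crude HTML-tag stripper for post bodies

def iter_post_texts(path):
    """Stream post bodies from Posts.xml without loading the file whole."""
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "row":
            body = elem.attrib.get("Body", "")
            # strip HTML tags, then decode entities left inside the text
            yield html.unescape(TAG_RE.sub(" ", body)).strip()
            elem.clear()  # free finished elements as we stream

# Hypothetical usage over an extracted dump:
# for text in iter_post_texts("stackoverflow/Posts.xml"):
#     print(text[:80])
```

`iterparse` keeps memory flat, which matters because the larger sites' Posts.xml files run to tens of gigabytes.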
A topic-centric list of high-quality open datasets.
Official repo for the paper "Scaling Synthetic Data Creation with 1,000,000,000 Personas"
A curated list of research papers and resources on cultural LLMs.
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
An MBTI Exploration of Large Language Models
Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs.
This repo records resources on role-playing abilities of LLMs, including datasets, papers, applications, etc.
[NAACL Findings 2024] PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits
This repo contains code for our NeurIPS 2023 spotlight paper: Evaluating and Inducing Personality in Pre-trained Language Models
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, …
ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and it is open source.
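The claim in these two entries, that an RNN can match transformer quality while training in parallel, rests on RWKV's WKV recurrence: an exponentially decayed weighted average of past values. A simplified sketch of the sequential form, loosely following the RWKV-4 formulation (not the repo's optimized kernel):

```python
# Illustrative RWKV-style WKV recurrence (simplified from the RWKV-4
# formulation; not the repo's CUDA kernel).
import torch

def wkv(k, v, w, u):
    """k, v: (T, C); w: per-channel decay > 0; u: bonus for current token."""
    T, C = k.shape
    out = torch.zeros(T, C)
    num = torch.zeros(C)  # running exp-weighted sum of values
    den = torch.zeros(C)  # running sum of exp weights
    for t in range(T):
        # the current token enters with an extra bonus u, then joins the state
        cur = torch.exp(u + k[t])
        out[t] = (num + cur * v[t]) / (den + cur)
        num = torch.exp(-w) * num + torch.exp(k[t]) * v[t]
        den = torch.exp(-w) * den + torch.exp(k[t])
    return out

T, C = 8, 4
o = wkv(torch.randn(T, C), torch.randn(T, C), w=torch.ones(C), u=torch.zeros(C))
print(o.shape)  # torch.Size([8, 4]): RNN-style state, attention-like output
```

Because the state is just two running sums per channel, inference cost per token is constant, while the same computation can be unrolled over time for GPT-style parallel training.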
AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark
Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input"
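Unlimiformer's core trick is to let each decoder query attend only to its top-k nearest encoder states, retrieved from an index, so the input length is no longer bounded by the attention window. A toy sketch of that top-k cross-attention (the real system uses a kNN index such as FAISS rather than scoring everything, which this sketch does for brevity):

```python
# Toy Unlimiformer-style top-k cross-attention. Illustrative only:
# computing full scores defeats the purpose; the paper retrieves
# the k nearest encoder states from an approximate-NN index.
import torch

def topk_cross_attention(q, enc, k=16):
    """q: (T, d) decoder queries; enc: (N, d) very long encoder sequence."""
    scores = q @ enc.t()                 # (T, N) similarity scores
    top, idx = scores.topk(k, dim=-1)    # keep only k best keys per query
    weights = top.softmax(dim=-1)        # softmax over the k survivors
    picked = enc[idx]                    # (T, k, d) gathered encoder states
    return torch.einsum("tk,tkd->td", weights, picked)

out = topk_cross_attention(torch.randn(4, 32), torch.randn(10_000, 32))
print(out.shape)  # torch.Size([4, 32])
```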
Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context"
Implementation of the Recurrent Memory Transformer (NeurIPS 2022 paper) in PyTorch
Implementation of Memformer, a memory-augmented Transformer, in PyTorch
booydar / LM-RMT: Recurrent Memory Transformer (forked from kimiyoung/transformer-xl)
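The shared idea in these memory-transformer entries (RMT most directly) is segment-level recurrence: a few memory tokens are attached to each segment, and their output states carry information into the next segment. A hedged PyTorch sketch using a stock encoder as a stand-in for any of these repos' models:

```python
# Segment-level recurrence with memory tokens, in the spirit of the
# Recurrent Memory Transformer. Illustrative; not these repos' code.
import torch
import torch.nn as nn

dim, n_mem, seg_len = 64, 4, 16
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)
memory = nn.Parameter(torch.randn(1, n_mem, dim))  # learned initial memory

def process(segments, mem):
    """Run each segment with memory tokens attached; carry memory forward."""
    outs = []
    for seg in segments:  # seg: (batch, seg_len, dim)
        x = torch.cat([mem.expand(seg.size(0), -1, -1), seg], dim=1)
        y = encoder(x)
        mem = y[:, :n_mem]         # updated memory flows to the next segment
        outs.append(y[:, n_mem:])  # the rest are ordinary token outputs
    return torch.cat(outs, dim=1), mem

long_input = torch.randn(2, 4 * seg_len, dim)
segments = long_input.split(seg_len, dim=1)
out, final_mem = process(segments, memory)
print(out.shape)  # torch.Size([2, 64, 64])
```

Each segment still pays only quadratic attention over `seg_len + n_mem` tokens, so the effective context grows with the number of segments at constant per-segment cost.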
Unofficial PyTorch / 🤗 Transformers (Gemma/Llama3) implementation of "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention"
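Infini-attention bolts a compressive memory onto standard attention: each segment's keys and values are written into a fixed-size matrix via a linear-attention update, and later queries read from it. A simplified sketch of the memory read/write step as described in the paper (not this repo's implementation):

```python
# Sketch of Infini-attention's compressive memory, following the paper's
# linear-attention update. Simplified; not this repo's code.
import torch

def sigma(x):
    return torch.nn.functional.elu(x) + 1.0  # keeps features positive

def infini_memory_step(q, k, v, M, z):
    """q, k, v: (T, d). M: (d, d) memory matrix; z: (d,) normalizer."""
    # read: retrieve what previous segments wrote into memory
    retrieved = (sigma(q) @ M) / (sigma(q) @ z).unsqueeze(-1).clamp(min=1e-6)
    # write: fold the current segment's keys/values into the same memory
    M = M + sigma(k).transpose(0, 1) @ v
    z = z + sigma(k).sum(dim=0)
    return retrieved, M, z

d, T = 8, 16
M, z = torch.zeros(d, d), torch.zeros(d)
for seg in torch.randn(3, T, d):  # three segments of one long stream
    out, M, z = infini_memory_step(seg, seg, seg, M, z)
print(out.shape)  # torch.Size([16, 8]): memory readout per query
```

In the full model this readout is gated against ordinary local attention within the segment, so memory stays O(d²) no matter how long the stream runs.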