Implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM.
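To ground what these implementations share, here is a minimal sketch of the reward-modeling stage of RLHF: a pairwise Bradley-Terry loss that trains a scorer to rank the human-preferred response above the rejected one. The tiny MLP and random embeddings are illustrative stand-ins, not any listed repo's actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative reward model: in RLHF this would be a transformer with a
# scalar head; a small MLP keeps the sketch self-contained.
class RewardModel(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # one scalar reward per sequence

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in embeddings for (chosen, rejected) response pairs from human labels.
chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)

# Bradley-Terry pairwise loss: -log sigmoid(r_chosen - r_rejected),
# which pushes the preferred response's reward above the rejected one's.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```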
A curated list of reinforcement learning from human feedback resources (continually updated)
Open-source pre-training implementation of Google's LaMDA in PyTorch, adding ChatGPT-style RLHF.
Let's build better datasets, together!
[CVPR 2024] Code for the paper "Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model"
The ParroT framework enhances and regulates translation abilities during chat, built on open-source LLMs (e.g., LLaMA-7b, Bloomz-7b1-mt) with human-written translation and evaluation data.
Implementation of Reinforcement Learning from Human Feedback (RLHF)
Product analytics for AI Assistants
BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).
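As a quick orientation to such safety-preference data, the sketch below loads BeaverTails from the Hugging Face Hub. The dataset id, split, and field names are assumptions based on the PKU-Alignment release, so verify them against the repository.

```python
from datasets import load_dataset

# Assumption: published as "PKU-Alignment/BeaverTails" on the Hugging Face
# Hub; split and field names may differ across dataset versions.
ds = load_dataset("PKU-Alignment/BeaverTails", split="30k_train")

example = ds[0]
print(example["prompt"])    # user prompt
print(example["response"])  # model response
print(example["is_safe"])   # human safety annotation
```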
The Prism Alignment Project
Dataset Viber is your chill repo for data collection, annotation and vibe checks.
[ECCV 2024] Towards Reliable Advertising Image Generation Using Human Feedback
[ICML 2024] Code for the paper "Confronting Reward Overoptimization for Diffusion Models: A Perspective of Inductive and Primacy Biases"
Code for the paper "Aligning LLM Agents by Learning Latent Preference from User Edits".
[NeurIPS 2023] Official codebase for "Aligning Synthetic Medical Images with Clinical Knowledge using Human Feedback"
Reinforcement Learning from Human Feedback with 🤗 TRL
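For orientation, below is a one-step PPO loop following TRL's classic quickstart pattern (roughly pre-0.12 releases; newer versions reworked the trainer API, so treat class names and signatures as version-dependent). The constant reward is a stand-in for a trained reward model's score.

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer
from trl.core import respond_to_batch

# Policy model with a value head, plus a frozen reference copy for the KL term.
model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
model_ref = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

query_tensor = tokenizer.encode("Explain RLHF in one sentence:", return_tensors="pt")
response_tensor = respond_to_batch(model, query_tensor)  # sampled continuation only

ppo_trainer = PPOTrainer(PPOConfig(batch_size=1, mini_batch_size=1), model, model_ref, tokenizer)

# A constant reward stands in for a trained reward model so the sketch runs end to end.
reward = [torch.tensor(1.0)]
train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)
```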
Search Engine Optimization using Human Implicit Feedback