RLHF implementation details of OAI's 2019 codebase
☆197 · Jan 14, 2024 · Updated 2 years ago
Alternatives and similar repositories for lm-human-preference-details
Users interested in lm-human-preference-details are comparing it to the libraries listed below.
- ☆160 · Nov 23, 2024 · Updated last year
- Code for the paper Fine-Tuning Language Models from Human Preferences ☆1,382 · Jul 25, 2023 · Updated 2 years ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,422 · Mar 3, 2024 · Updated 2 years ago
- CleanRL's implementation of DeepMind's Podracer Sebulba Architecture for Distributed DRL ☆123 · Aug 22, 2024 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆843 · Jul 1, 2024 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆101 · Apr 2, 2024 · Updated last year
- Reference implementation for DPO (Direct Preference Optimization) ☆2,872 · Aug 11, 2024 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,744 · Jan 8, 2024 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences ☆2,383 · Mar 1, 2024 · Updated 2 years ago
- Train transformer language models with reinforcement learning. ☆17,781 · Updated this week
- Recipes to train reward models for RLHF. ☆1,523 · Apr 24, 2025 · Updated 11 months ago
- Evaluating Reward Models in Multilingual Settings (ACL Main '25) ☆41 · May 16, 2025 · Updated 10 months ago
- Official implementation of Categorical Flow Maps on text. ☆49 · Feb 16, 2026 · Updated last month
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,231 · Mar 24, 2026 · Updated last week
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆4,337 · Dec 9, 2025 · Updated 3 months ago
- Code accompanying the paper Pretraining Language Models with Human Preferences ☆180 · Feb 13, 2024 · Updated 2 years ago
- Robust recipes to align language models with human and AI preferences ☆5,544 · Sep 8, 2025 · Updated 6 months ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,591 · Nov 24, 2025 · Updated 4 months ago
- The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization ☆930 · Mar 23, 2024 · Updated 2 years ago
- Exploring Model Kinship for Merging Large Language Models ☆28 · Apr 16, 2025 · Updated 11 months ago
- Gym wrapper for pysc2 ☆10 · Sep 16, 2022 · Updated 3 years ago
- Scaling Data-Constrained Language Models ☆342 · Jun 28, 2025 · Updated 9 months ago
- Minimalistic large language model 3D-parallelism training ☆2,626 · Feb 19, 2026 · Updated last month
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Apr 20, 2024 · Updated last year
- Code for "Learning to summarize from human feedback" ☆1,060 · Sep 5, 2023 · Updated 2 years ago
- AllenAI's post-training codebase ☆3,662 · Updated this week
- ☆15 · Jul 16, 2021 · Updated 4 years ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,961 · Aug 9, 2025 · Updated 7 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆949 · Feb 16, 2025 · Updated last year
- ☆17 · Feb 19, 2024 · Updated 2 years ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024] ☆591 · Dec 9, 2024 · Updated last year
- Open-source Human Feedback Library ☆11 · Oct 25, 2023 · Updated 2 years ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,110 · Jun 1, 2023 · Updated 2 years ago
- ☆99 · Jun 27, 2024 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆68 · Mar 5, 2026 · Updated 3 weeks ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,236 · Aug 14, 2025 · Updated 7 months ago
- RewardBench: the first evaluation tool for reward models. ☆707 · Feb 16, 2026 · Updated last month
- Official repository for ORPO ☆473 · May 31, 2024 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆907 · Sep 30, 2025 · Updated 6 months ago