RLHF implementation details of OpenAI's 2019 codebase
☆ 197 · Jan 14, 2024 · Updated 2 years ago
Alternatives and similar repositories for lm-human-preference-details
Users interested in lm-human-preference-details are comparing it to the libraries listed below.
- ☆ 163 · Nov 23, 2024 · Updated last year
- Code for the paper Fine-Tuning Language Models from Human Preferences · ☆ 1,389 · Jul 25, 2023 · Updated 2 years ago
- Secrets of RLHF in Large Language Models Part I: PPO · ☆ 1,424 · Mar 3, 2024 · Updated 2 years ago
- CleanRL's implementation of DeepMind's Podracer Sebulba Architecture for Distributed DRL · ☆ 124 · Aug 22, 2024 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. · ☆ 844 · Jul 1, 2024 · Updated last year
- Language models scale reliably with over-training and on downstream tasks · ☆ 101 · Apr 2, 2024 · Updated 2 years ago
- Reference implementation for DPO (Direct Preference Optimization); see the DPO loss sketch after this list · ☆ 2,888 · Aug 11, 2024 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) · ☆ 4,745 · Jan 8, 2024 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences · ☆ 2,387 · Mar 1, 2024 · Updated 2 years ago
- Train transformer language models with reinforcement learning. · ☆ 18,282 · Updated this week
- Recipes to train reward models for RLHF; see the reward-model loss sketch after this list · ☆ 1,531 · Apr 24, 2025 · Updated last year
- Evaluating Reward Models in Multilingual Settings (ACL Main '25) · ☆ 42 · May 16, 2025 · Updated 11 months ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… · ☆ 9,441 · Updated this week
- Official implementation of Categorical Flow Maps on text. · ☆ 56 · Feb 16, 2026 · Updated 2 months ago
- A curated list of reinforcement learning with human feedback resources (continually updated) · ☆ 4,358 · Dec 9, 2025 · Updated 5 months ago
- Code accompanying the paper Pretraining Language Models with Human Preferences · ☆ 181 · Feb 13, 2024 · Updated 2 years ago
- Robust recipes to align language models with human and AI preferences · ☆ 5,593 · Apr 8, 2026 · Updated last month
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback · ☆ 1,600 · Nov 24, 2025 · Updated 5 months ago
- The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization; see the PPO clipped-objective sketch after this list · ☆ 935 · Mar 23, 2024 · Updated 2 years ago
- Exploring Model Kinship for Merging Large Language Models · ☆ 28 · Apr 16, 2025 · Updated last year
- Gym wrapper for pysc2 · ☆ 10 · Sep 16, 2022 · Updated 3 years ago
- Scaling Data-Constrained Language Models · ☆ 343 · Jun 28, 2025 · Updated 10 months ago
- Minimalistic large language model 3D-parallelism training · ☆ 2,678 · Apr 7, 2026 · Updated last month
- A framework for PyTorch to enable fault management for collective communication libraries (CCL) such as NCCL · ☆ 20 · Feb 9, 2026 · Updated 3 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… · ☆ 32 · Apr 20, 2024 · Updated 2 years ago
- Code for "Learning to summarize from human feedback" · ☆ 1,063 · Sep 5, 2023 · Updated 2 years ago
- AllenAI's post-training codebase · ☆ 3,708 · May 3, 2026 · Updated last week
- ☆ 15 · Jul 16, 2021 · Updated 4 years ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. · ☆ 1,982 · Aug 9, 2025 · Updated 9 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward; see the SimPO loss sketch after this list · ☆ 953 · Feb 16, 2025 · Updated last year
- ☆ 17 · Feb 19, 2024 · Updated 2 years ago
- Open-source Human Feedback Library · ☆ 11 · Oct 25, 2023 · Updated 2 years ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] · ☆ 593 · Dec 9, 2024 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems · ☆ 2,128 · Jun 1, 2023 · Updated 2 years ago
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models · ☆ 68 · Mar 5, 2026 · Updated 2 months ago
- ☆ 99 · Jun 27, 2024 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆ 2,246 · Aug 14, 2025 · Updated 8 months ago
- RewardBench: the first evaluation tool for reward models. · ☆ 713 · Feb 16, 2026 · Updated 2 months ago
- Official repository for ORPO · ☆ 483 · May 31, 2024 · Updated last year
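
For orientation, a few of the methods named above admit compact sketches. First, the DPO entry: below is a minimal sketch of the DPO loss as described in the paper, not code from that repository; the tensor names and the `beta` default are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO: a Bradley-Terry loss on the implicit reward, i.e. the
    policy/reference log-probability ratio, scaled by beta."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # -log sigmoid(beta * margin): large when the rejected response
    # is ranked above the chosen one
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```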
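
The root repository and the 37-details entry both center on PPO. Here is a minimal sketch of the clipped surrogate objective, with assumed per-token tensor names (not taken from either codebase):

```python
import torch

def ppo_clip_loss(new_logprobs: torch.Tensor,
                  old_logprobs: torch.Tensor,
                  advantages: torch.Tensor,
                  clip_eps: float = 0.2) -> torch.Tensor:
    """PPO clipped surrogate: pessimistic minimum of the unclipped
    and clipped importance-weighted advantage."""
    ratio = torch.exp(new_logprobs - old_logprobs)  # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()  # negated for gradient descent
```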
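
The SimPO entry drops the reference model entirely and length-normalizes the reward. A sketch under assumed inputs (summed log-probs and token counts per response); the `beta` and `gamma` defaults are illustrative, not the paper's recommended settings for any particular model:

```python
import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps_sum: torch.Tensor,
               rejected_logps_sum: torch.Tensor,
               chosen_lens: torch.Tensor,
               rejected_lens: torch.Tensor,
               beta: float = 2.0,
               gamma: float = 1.0) -> torch.Tensor:
    """SimPO: reference-free reward = beta * average token log-prob,
    with a target margin gamma between chosen and rejected."""
    chosen_reward = beta * chosen_logps_sum / chosen_lens
    rejected_reward = beta * rejected_logps_sum / rejected_lens
    return -F.logsigmoid(chosen_reward - rejected_reward - gamma).mean()
```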
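
Finally, several entries (the reward-model recipes, RewardBench) revolve around pairwise reward models. The standard Bradley-Terry training loss, sketched with hypothetical tensor names:

```python
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_rewards: torch.Tensor,
                      rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Pairwise reward-model loss: push the scalar reward of the
    preferred response above that of the rejected one."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```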