RLHF implementation details of OpenAI's 2019 codebase
☆197 · Updated Jan 14, 2024
Alternatives and similar repositories for lm-human-preference-details
Users interested in lm-human-preference-details are comparing it to the repositories listed below.
- ☆160 · Updated Nov 23, 2024
- Code for the paper Fine-Tuning Language Models from Human Preferences ☆1,384 · Updated Jul 25, 2023
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,421 · Updated Mar 3, 2024
- CleanRL's implementation of DeepMind's Podracer Sebulba Architecture for Distributed DRL ☆124 · Updated Aug 22, 2024
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆842 · Updated Jul 1, 2024
- Language models scale reliably with over-training and on downstream tasks ☆101 · Updated Apr 2, 2024
- Reference implementation for DPO (Direct Preference Optimization) ☆2,883 · Updated Aug 11, 2024
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,743 · Updated Jan 8, 2024
- A modular RL library to fine-tune language models to human preferences ☆2,384 · Updated Mar 1, 2024
- Train transformer language models with reinforcement learning. ☆18,054 · Updated this week
- Recipes to train reward models for RLHF. ☆1,529 · Updated Apr 24, 2025
- Evaluating Reward Models in Multilingual Settings (ACL Main '25) ☆42 · Updated May 16, 2025
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy…) ☆9,340 · Updated this week
- Official implementation of Categorical Flow Maps on text. ☆51 · Updated Feb 16, 2026
- A curated list of reinforcement learning with human feedback resources (continually updated) ☆4,348 · Updated Dec 9, 2025
- Code accompanying the paper Pretraining Language Models with Human Preferences ☆182 · Updated Feb 13, 2024
- Robust recipes to align language models with human and AI preferences ☆5,558 · Updated Apr 8, 2026
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,596 · Updated Nov 24, 2025
- The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization ☆931 · Updated Mar 23, 2024
- Exploring Model Kinship for Merging Large Language Models ☆28 · Updated Apr 16, 2025
- Gym wrapper for pysc2 ☆10 · Updated Sep 16, 2022
- Scaling Data-Constrained Language Models ☆343 · Updated Jun 28, 2025
- Minimalistic large language model 3D-parallelism training ☆2,654 · Updated Apr 7, 2026
- A framework for PyTorch to enable fault management for collective communication libraries (CCL) such as NCCL ☆20 · Updated Feb 9, 2026
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated Apr 20, 2024
- Code for "Learning to summarize from human feedback" ☆1,061 · Updated Sep 5, 2023
- AllenAI's post-training codebase ☆3,683 · Updated Apr 13, 2026
- ☆15 · Updated Jul 16, 2021
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,966 · Updated Aug 9, 2025
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆951 · Updated Feb 16, 2025
- ☆17 · Updated Feb 19, 2024
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆592 · Updated Dec 9, 2024
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,117 · Updated Jun 1, 2023
- Code and configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆68 · Updated Mar 5, 2026
- ☆99 · Updated Jun 27, 2024
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,244 · Updated Aug 14, 2025
- RewardBench: the first evaluation tool for reward models. ☆707 · Updated Feb 16, 2026
- Official repository for ORPO ☆478 · Updated May 31, 2024
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆903 · Updated Sep 30, 2025
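Several of the repositories above center on preference-optimization objectives such as DPO. As a point of reference, here is a minimal plain-Python sketch of the standard DPO loss for one preference pair, -log σ(β[(log πθ(y_w) − log π_ref(y_w)) − (log πθ(y_l) − log π_ref(y_l))]). This is not code from any listed repository; the function and argument names are illustrative only.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for a single preference pair (illustrative sketch).

    Arguments are summed token log-probabilities of the chosen and
    rejected responses under the trained policy (pi_*) and the frozen
    reference model (ref_*); beta scales the implicit reward.
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # loss = -log(sigmoid(margin)), computed as a numerically stable softplus
    return max(-margin, 0.0) + math.log1p(math.exp(-abs(margin)))

# With zero margin (policy agrees with reference), the loss is log 2.
print(dpo_loss(-10.0, -12.0, -10.0, -12.0))
```

The loss shrinks as the policy raises the chosen response's log-probability relative to the reference and lowers the rejected one's; real implementations in the libraries above add batching, label smoothing, and other variants on top of this core term.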