mnoukhov / async_rlhf
Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models
☆59 · Updated 2 months ago
Alternatives and similar repositories for async_rlhf
Users interested in async_rlhf are also comparing it to the repositories listed below.
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆77 · Updated last year
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates ☆129 · Updated this week
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆166 · Updated last month
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- ☆96 · Updated 9 months ago
- ☆54 · Updated 2 weeks ago
- ☆53 · Updated last year
- NeurIPS 2024 tutorial on LLM inference ☆45 · Updated 7 months ago
- The official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆33 · Updated 3 months ago
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated 10 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆59 · Updated 9 months ago
- [COLM 2025] Code for the paper "Learning Adaptive Parallel Reasoning with Language Models" ☆114 · Updated 2 months ago
- Replicating O1 inference-time scaling laws ☆89 · Updated 7 months ago
- Long Context Extension and Generalization in LLMs ☆57 · Updated 9 months ago
- A toolkit for scaling law research ⚖ ☆50 · Updated 5 months ago
- A large-scale, high-quality math dataset for reinforcement learning in language models ☆58 · Updated 4 months ago
- ☆98 · Updated last year
- Can Language Models Solve Olympiad Programming? ☆119 · Updated 6 months ago
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) ☆19 · Updated 5 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆30 · Updated last year
- ☆117 · Updated 4 months ago
- ☆98 · Updated last year
- ☆144 · Updated 7 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆75 · Updated 8 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆29 · Updated last year
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆44 · Updated last year
- ☆66 · Updated last year
- ☆114 · Updated 5 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆91 · Updated 2 months ago
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆27 · Updated 7 months ago