rosieyzh / openrlhf-pretrain
Code for "Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining"
☆15 · Updated last month
Alternatives and similar repositories for openrlhf-pretrain
Users interested in openrlhf-pretrain are comparing it to the repositories listed below.
- Code for "Reasoning to Learn from Latent Thoughts" ☆94 · Updated last month
- Official implementation of Rewarded Soups ☆57 · Updated last year
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆43 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆28 · Updated last year
- ☆31 · Updated 4 months ago
- Self-Supervised Alignment with Mutual Information ☆18 · Updated 11 months ago
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆58 · Updated last month
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆63 · Updated last month
- ☆15 · Updated this week
- Code for the paper "Learning Adaptive Parallel Reasoning with Language Models" ☆77 · Updated 3 weeks ago
- ☆14 · Updated last year
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆31 · Updated last month
- Exploration of automated dataset selection approaches at large scales. ☆40 · Updated 2 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆36 · Updated last year
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) ☆19 · Updated 3 months ago
- Implementation of Dualformer ☆17 · Updated 2 months ago
- Implementation of Direct Preference Optimization ☆16 · Updated last year
- ☆85 · Updated last year
- ☆15 · Updated 6 months ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆44 · Updated last month
- Official PyTorch implementation of the Longhorn Deep State Space Model ☆50 · Updated 5 months ago
- Official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆29 · Updated 7 months ago
- Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers ☆18 · Updated 2 months ago
- ☆92 · Updated 10 months ago
- ☆40 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 8 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆52 · Updated 2 months ago
- GenRM-CoT: Data release for verification rationales ☆59 · Updated 6 months ago
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆28 · Updated last month
- ☆15 · Updated 3 weeks ago