Code for Paper (ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models)
☆201 · Dec 16, 2023 · Updated 2 years ago
Alternatives and similar repositories for ReMax
Users interested in ReMax are comparing it to the repositories listed below.
- Code for Paper (Policy Optimization in RLHF: The Impact of Out-of-preference Data) ☆29 · Dec 19, 2023 · Updated 2 years ago
- [ACL'24 Findings] Official code for "TLCR: Token-Level Continuous Reward for Fine-grained Reinforcement Learning from Human Feedback" ☆12 · Dec 6, 2024 · Updated last year
- Code for Paper (Preserving Diversity in Supervised Fine-tuning of Large Language Models) ☆52 · May 12, 2025 · Updated 10 months ago
- ☆12 · May 14, 2024 · Updated last year
- Code for EMNLP'24 paper "On Diversified Preferences of Large Language Model Alignment" ☆16 · Aug 6, 2024 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Jan 17, 2024 · Updated 2 years ago
- A trainable user simulator ☆34 · Jun 30, 2025 · Updated 8 months ago
- Code for the paper "Understanding and Mitigating Hallucinations in Large Vision-Language Models via Modular Attrib…" ☆35 · Jul 14, 2025 · Updated 8 months ago
- Code for ICLR 2022 Paper (HyperDQN: A Randomized Exploration Method for Deep Reinforcement Learning) ☆12 · Nov 28, 2023 · Updated 2 years ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆17 · Jan 8, 2025 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · May 20, 2025 · Updated 10 months ago
- Implementation of ICLR 2025 paper "Q-Adapter: Customizing Pre-trained LLMs to New Preferences with Forgetting Mitigation" ☆18 · Oct 5, 2024 · Updated last year
- A library for constrained RLHF ☆13 · Feb 19, 2024 · Updated 2 years ago
- PyTorch implementations of Offline Preference-Based RL (PbRL) algorithms ☆21 · Mar 24, 2025 · Updated last year
- ☆64 · Apr 9, 2024 · Updated last year
- Code for blog post "Can Better Cold-Start Strategies Improve RL Training for LLMs?" ☆20 · Mar 9, 2025 · Updated last year
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,591 · Nov 24, 2025 · Updated 4 months ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,422 · Mar 3, 2024 · Updated 2 years ago
- Reference implementation for DPO (Direct Preference Optimization) ☆2,868 · Aug 11, 2024 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆20 · May 24, 2024 · Updated last year
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆453 · May 13, 2025 · Updated 10 months ago
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆202 · Apr 17, 2025 · Updated 11 months ago
- Official repository for ORPO ☆473 · May 31, 2024 · Updated last year
- Multi-agent social simulation + an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Langu…" ☆355 · Jun 18, 2023 · Updated 2 years ago
- RewardBench: the first evaluation tool for reward models ☆705 · Feb 16, 2026 · Updated last month
- Recipes to train reward models for RLHF ☆1,521 · Apr 24, 2025 · Updated 11 months ago
- Reproduce R1 Zero on Logic Puzzle ☆2,441 · Mar 20, 2025 · Updated last year
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,231 · Updated this week
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆591 · Dec 9, 2024 · Updated last year
- Scalable toolkit for efficient model alignment ☆850 · Oct 6, 2025 · Updated 5 months ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Jan 16, 2025 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data ☆842 · Jul 1, 2024 · Updated last year
- Code for the paper "Dense Reward for Free in Reinforcement Learning from Human Feedback" (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… ☆38 · Aug 11, 2024 · Updated last year
- Super-efficient RLHF training of LLMs with parameter reallocation ☆333 · Apr 24, 2025 · Updated 11 months ago
- ☆43 · Sep 3, 2024 · Updated last year
- RLA is a tool for managing your RL experiments automatically ☆72 · Feb 7, 2023 · Updated 3 years ago
- Code and data for "MT-Eval: A Multi-Turn Capabilities Evaluation Benchmark for Large Language Models" ☆51 · Nov 18, 2025 · Updated 4 months ago
- A recipe for online RLHF and online iterative DPO ☆545 · Dec 28, 2024 · Updated last year
- Collection of papers on scalable automated alignment ☆93 · Oct 22, 2024 · Updated last year