opendilab / awesome-RLVR
A curated list of reinforcement learning with verifiable rewards (RLVR) resources (continually updated)
☆32 · Updated last month
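As background for the entries below: in RLVR, the reward signal comes from a programmatic verifier rather than a learned reward model. The following is a minimal sketch of the general shape of such a reward function, assuming an exact-match answer check; the `extract_final_answer` helper and the "Answer:" output convention are illustrative assumptions, not code from any repository in this list.

```python
import re


def extract_final_answer(completion: str) -> str | None:
    """Hypothetical helper: assumes the model ends its output with 'Answer: <value>'."""
    match = re.search(r"Answer:\s*(.+)", completion)
    return match.group(1).strip() if match else None


def verifiable_reward(completion: str, reference_answer: str) -> float:
    """Rule-based reward: 1.0 on an exact answer match, 0.0 otherwise.

    Real RLVR pipelines swap in task-specific verifiers (math checkers,
    unit tests, symbolic equivalence); exact string match is the simplest case.
    """
    predicted = extract_final_answer(completion)
    return 1.0 if predicted == reference_answer else 0.0


# Usage: score one rollout against its ground-truth answer.
print(verifiable_reward("...so the total is 42.\nAnswer: 42", "42"))  # 1.0
```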
Alternatives and similar repositories for awesome-RLVR
Users interested in awesome-RLVR are comparing it to the repositories listed below:
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆350 · Updated 3 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆138 · Updated 3 months ago
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆301 · Updated last week
- Official Repository of "Learning to Reason under Off-Policy Guidance" ☆348 · Updated 2 weeks ago
- ☆275 · Updated 3 months ago
- ☆300 · Updated 4 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆89 · Updated last year
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆327 · Updated last year
- [ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium" ☆25 · Updated 3 months ago
- Direct preference optimization with f-divergences. ☆14 · Updated 11 months ago
- ☆67 · Updated 6 months ago
- ☆415 · Updated last week
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ☆308 · Updated this week
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆305 · Updated 3 weeks ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- ☆211 · Updated 8 months ago
- Repository for the paper "Free Process Rewards without Process Labels" ☆164 · Updated 7 months ago
- ☆210 · Updated 6 months ago
- An Awesome List of Agentic Models Trained with Reinforcement Learning ☆519 · Updated last week
- A version of verl to support diverse tool use ☆607 · Updated this week
- This is my attempt to create a Self-Correcting LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆37 · Updated 3 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆194 · Updated last year
- A comprehensive collection of process reward models. ☆114 · Updated 2 weeks ago
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆44 · Updated 3 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆252 · Updated 2 months ago
- ☆171 · Updated 5 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆112 · Updated 2 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆76 · Updated 4 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆258 · Updated 5 months ago
- Official Repository of "Learning what reinforcement learning can't" ☆66 · Updated last month