opendilab / awesome-RLVR
A curated list of work on reinforcement learning with verifiable rewards (continually updated)
☆44 · Updated this week
Alternatives and similar repositories for awesome-RLVR
Users interested in awesome-RLVR are comparing it to the repositories listed below.
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆373 · Updated last month
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆93 · Updated last year
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆145 · Updated last month
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆398 · Updated 5 months ago
- ☆319 · Updated 6 months ago
- Official repository of "Learning to Reason under Off-Policy Guidance" ☆388 · Updated 2 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆337 · Updated 2 months ago
- ☆292 · Updated 5 months ago
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆38 · Updated last year
- 🔥🔥🔥 Latest papers and code on uncertainty-based RL ☆56 · Updated 3 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆199 · Updated 2 years ago
- Papers on implicit reasoning in LLMs ☆22 · Updated 9 months ago
- My attempt to create a self-correcting LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆37 · Updated 5 months ago
- Building an open-ended embodied agent in a battle royale FPS game ☆38 · Updated last year
- [ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium" ☆31 · Updated 3 weeks ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆115 · Updated 4 months ago
- ☆398 · Updated 2 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO ☆326 · Updated last year
- ☆213 · Updated 10 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆149 · Updated 10 months ago
- ☆202 · Updated 4 months ago
- The OlymMATH dataset ☆21 · Updated 6 months ago
- [AI4MATH@ICML2025] Do Not Let Low-Probability Tokens Over-Dominate in RL for LLMs ☆41 · Updated 7 months ago
- Repository for the paper https://arxiv.org/abs/2504.13837 ☆288 · Updated 5 months ago
- [AAAI 2026] Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning" ☆92 · Updated last month
- ☆448 · Updated 2 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆141 · Updated last month
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- A comprehensive collection of process reward models ☆127 · Updated 2 months ago
- ☆19 · Updated 5 months ago