RLHFlow / Online-DPO-R1
Codebase for Iterative DPO Using Rule-based Rewards
☆257 · Updated 5 months ago
Alternatives and similar repositories for Online-DPO-R1
Users interested in Online-DPO-R1 are comparing it to the libraries listed below.
- ☆318 · Updated last month
- ☆870 · Updated last week
- A scalable, end-to-end training pipeline for general-purpose agents ☆359 · Updated 3 months ago
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions, all in one framework ☆274 · Updated 3 weeks ago
- Adds Sequence Parallelism to LLaMA-Factory ☆564 · Updated last week
- ☆234 · Updated 4 months ago
- [ICML 2025] "SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator" ☆551 · Updated 2 months ago
- [NeurIPS 2025🔥] Main source code of the SRPO framework ☆105 · Updated 2 weeks ago
- SDAR (Synergy of Diffusion and AutoRegression), a large diffusion language model (1.7B, 4B, 8B, 30B) ☆222 · Updated 2 weeks ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆137 · Updated 6 months ago
- R1-like Computer-use Agent ☆85 · Updated 6 months ago
- ☆119 · Updated 2 weeks ago
- A library for generating difficulty-scalable, multi-tool, and verifiable agentic tasks with execution trajectories ☆164 · Updated 2 months ago
- Awesome-Efficient-Inference-for-LRMs is a collection of state-of-the-art, novel, exciting, token-efficient methods for Large Reasoning Mo… ☆224 · Updated 3 months ago
- [NeurIPS'25 Spotlight] Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ☆1,214 · Updated 2 weeks ago
- The official implementation of Self-Play Preference Optimization (SPPO) ☆583 · Updated 8 months ago
- Recipes to train self-rewarding reasoning LLMs ☆226 · Updated 7 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆180 · Updated 11 months ago
- ✨✨ R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆261 · Updated 4 months ago
- [ICLR 2025] Vision-Centric Evaluation for Retrieval-Augmented Multimodal Models