RLHFlow / Online-DPO-R1
Codebase for Iterative DPO Using Rule-based Rewards
☆252 · Updated 3 months ago
Alternatives and similar repositories for Online-DPO-R1
Users interested in Online-DPO-R1 are comparing it to the libraries listed below.
- ✨ A synthetic dataset generation framework that produces diverse coding questions and verifiable solutions - all in one framework ☆238 · Updated last month
- Adds Sequence Parallelism to LLaMA-Factory ☆525 · Updated last week
- R1-like Computer-use Agent ☆77 · Updated 3 months ago
- A scalable, end-to-end training pipeline for general-purpose agents ☆258 · Updated last week
- The official implementation of Self-Play Preference Optimization (SPPO) ☆569 · Updated 5 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆136 · Updated 3 months ago
- ☆216 · Updated 2 months ago
- ☆62 · Updated 4 months ago
- Mulberry, an o1-like Reasoning and Reflection MLLM Implemented via Collective MCTS ☆1,201 · Updated 3 months ago
- [ICML 2025] "SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator" ☆249 · Updated last week
- ☆195 · Updated this week
- Unified KV Cache Compression Methods for Auto-Regressive Models ☆1,190 · Updated 6 months ago
- DeepRetrieval - 🔥 Training a Search Agent with Retrieval Outcomes via Reinforcement Learning ☆580 · Updated 3 weeks ago
- Recipes to train self-rewarding reasoning LLMs. ☆224 · Updated 4 months ago
- Train your agent model via our easy and efficient framework ☆1,258 · Updated last week
- [ICLR 2025] Vision-Centric Evaluation for Retrieval-Augmented Multimodal Models ☆49 · Updated 5 months ago
- Official code of the paper "Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models" ☆79 · Updated last month
- Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasonin… ☆169 · Updated 7 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆176 · Updated 8 months ago
- Benchmarking LLMs via Uncertainty Quantification ☆234 · Updated last year
- The framework to prune LLMs to any size and any config. ☆93 · Updated last year
- A recipe for online RLHF and online iterative DPO. ☆521 · Updated 6 months ago
- [ACL'25] Code for "Aligning Large Language Models to Follow Instructions and Hallucinate Less via Effective Data Filtering" ☆20 · Updated last month
- [ICLR 2025🔥] SVD-LLM & [NAACL 2025🔥] SVD-LLM V2 ☆231 · Updated 3 months ago
- ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code (https://arxiv.org/abs/2311.098…) ☆301 · Updated 2 weeks ago
- A library for generating difficulty-scalable, multi-tool, and verifiable agentic tasks with execution trajectories ☆110 · Updated last week
- [ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc. ☆169 · Updated last month
- [ICLR Workshop 2025] Official source code for the paper "GuardReasoner: Towards Reasoning-based LLM Safeguards" ☆148 · Updated last month
- This is the repo for the paper "OS Agents: A Survey on MLLM-based Agents for Computer, Phone and Browser Use" (ACL 2025 Oral) ☆309 · Updated 2 weeks ago
- ☆45 · Updated 3 months ago