Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs"
☆392 · Jan 19, 2025 · Updated last year
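Step-DPO and several repositories below build on the Direct Preference Optimization objective. As background, a minimal sketch of the standard DPO loss for one preference pair follows; in Step-DPO the chosen/rejected sequences differ in a single reasoning step rather than the whole response. This is illustrative only, not the repository's code, and the function name and numbers are mine.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss for one preference pair (illustrative sketch).

    Inputs are sequence log-probabilities under the policy being trained
    and under the frozen reference model. In Step-DPO, chosen/rejected
    differ only at one reasoning step, but the objective is the same.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)), written as log1p(exp(-margin)) for stability
    return math.log1p(math.exp(-margin))

# If the policy prefers the chosen step more than the reference does,
# the margin is positive and the loss falls below log(2).
loss = dpo_loss(-2.0, -5.0, -3.0, -4.0, beta=0.5)
```

With zero margin the loss is exactly log(2); increasing the policy's relative preference for the chosen step drives it toward zero.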
Alternatives and similar repositories for Step-DPO
Users interested in Step-DPO are comparing it to the libraries listed below.
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆329 · Jan 29, 2026 · Updated last month
- ☆342 · Jun 5, 2025 · Updated 9 months ago
- ☆23 · Jul 5, 2024 · Updated last year
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆694 · Jan 20, 2025 · Updated last year
- Unified Language-driven Zero-shot Domain Adaptation (CVPR 2024) ☆17 · Nov 28, 2024 · Updated last year
- Code and data for the paper JiuZhang3.0 ☆49 · May 26, 2024 · Updated last year
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆270 · Sep 12, 2024 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆948 · Feb 16, 2025 · Updated last year
- ☆16 · Jul 23, 2024 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆152 · Feb 14, 2025 · Updated last year
- Official repo for Open-Reasoner-Zero ☆2,086 · Jun 2, 2025 · Updated 9 months ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,837 · Jan 17, 2025 · Updated last year
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆121 · Dec 10, 2024 · Updated last year
- Scalable RL solution for advanced reasoning of language models ☆1,821 · Mar 18, 2025 · Updated last year
- A series of technical reports on Slow Thinking with LLMs ☆761 · Aug 13, 2025 · Updated 7 months ago
- ☆1,113 · Jan 10, 2026 · Updated 2 months ago
- ☆321 · Sep 18, 2024 · Updated last year
- ☆968 · Jan 23, 2025 · Updated last year
- Reference implementation for DPO (Direct Preference Optimization) ☆2,866 · Aug 11, 2024 · Updated last year
- Recipes to train reward models for RLHF. ☆1,521 · Apr 24, 2025 · Updated 10 months ago
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & async RL) ☆9,191 · Updated this week
- [NeurIPS 2024] Official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆134 · Mar 21, 2025 · Updated last year
- Simple RL training for reasoning ☆3,841 · Dec 23, 2025 · Updated 2 months ago
- O1 Replication Journey ☆1,999 · Jan 14, 2025 · Updated last year
- ☆30 · Dec 27, 2024 · Updated last year
- Implementation of the paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Fee… ☆38 · Jul 25, 2024 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,102 · Jun 1, 2023 · Updated 2 years ago
- ☆51 · Oct 28, 2024 · Updated last year
- ☆83 · Apr 18, 2024 · Updated last year
- ☆31 · Mar 24, 2023 · Updated 2 years ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆115 · Jan 23, 2025 · Updated last year
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆187 · May 20, 2025 · Updated 10 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆591 · Dec 9, 2024 · Updated last year
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimiza… ☆20 · Nov 21, 2024 · Updated last year
- (NeurIPS 2022) Spatial Pruned Sparse Convolution for Efficient 3D Object Detection ☆65 · Jan 6, 2023 · Updated 3 years ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 · Jan 14, 2025 · Updated last year
- Implementation of the paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆55 · Nov 29, 2024 · Updated last year
- MAT: Mask-Aware Transformer for Large Hole Image Inpainting ☆17 · Apr 1, 2022 · Updated 3 years ago
- (ICCV 2023) IST-Net: Prior-free Category-level Pose Estimation with Implicit Space Transformation ☆120 · Dec 7, 2023 · Updated 2 years ago