dvlab-research / Step-DPO
Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs"
☆372 · Updated 5 months ago
Alternatives and similar repositories for Step-DPO
Users interested in Step-DPO are comparing it to the repositories listed below.
- Related works and background techniques for OpenAI o1 ☆223 · Updated 6 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO ☆318 · Updated 11 months ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆266 · Updated 10 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆652 · Updated 5 months ago
- ☆337 · Updated last month
- A series of technical reports on Slow Thinking with LLMs ☆708 · Updated last month
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆128 · Updated 2 weeks ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆237 · Updated 2 weeks ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆463 · Updated 8 months ago
- ☆205 · Updated 4 months ago
- Official repository of "Learning to Reason under Off-Policy Guidance" ☆249 · Updated last month
- A comprehensive collection of process reward models ☆95 · Updated 2 weeks ago