mathllm / Step-Controlled_DPO
☆23 · Updated last year
Alternatives and similar repositories for Step-Controlled_DPO
Users interested in Step-Controlled_DPO are comparing it to the repositories listed below.
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated last year
- ☆47 · Updated 4 months ago
- ☆17 · Updated 6 months ago
- ☆51 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated 2 years ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆87 · Updated 10 months ago
- ☆30 · Updated last year
- [ICML 2025] Beyond Bradley-Terry Models: A General Preference Model for Language Model Alignment (https://arxiv.org/abs/2410.02197) ☆39 · Updated 4 months ago
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- ☆25 · Updated 9 months ago
- A Dynamic Visual Benchmark for Evaluating Mathematical Reasoning Robustness of Vision Language Models ☆27 · Updated last year
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning ☆24 · Updated 3 months ago
- ☆21 · Updated last year
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- ☆72 · Updated 7 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆47 · Updated 9 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆57 · Updated 3 weeks ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 11 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆120 · Updated 8 months ago
- JudgeLRM: Large Reasoning Models as a Judge ☆40 · Updated last month
- Official code implementation for the ACL 2025 paper "Dynamic Scaling of Unit Tests for Code Reward Modeling" ☆27 · Updated 8 months ago
- ☆23 · Updated last year
- ☆50 · Updated 11 months ago
- ☆16 · Updated last year
- ☆45 · Updated last month
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 8 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆58 · Updated last year
- Sotopia-RL: Reward Design for Social Intelligence ☆46 · Updated this week