TianduoWang / DPO-ST
[ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning
☆52 · Updated last year
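The repository's core training signal is the standard Direct Preference Optimization objective. Below is a minimal PyTorch sketch of that loss, assuming the inputs are summed per-token log-probabilities of the chosen and rejected completions; this is an illustrative reconstruction of the published DPO formulation, not the repository's actual code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss (Rafailov et al., 2023) -- illustrative sketch,
    not taken from the DPO-ST repository.

    Each argument is a 1-D tensor holding the summed per-token
    log-probability of a completion under the trained policy or the
    frozen reference model, one value per preference pair.
    """
    # Implicit rewards: scaled log-ratios of policy to reference model
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin between chosen and rejected
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

In the self-training setup described by the paper's title, the preference pairs would come from the model's own sampled chain-of-thought solutions, with correct solutions preferred over incorrect ones.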
Alternatives and similar repositories for DPO-ST
Users interested in DPO-ST are comparing it to the repositories listed below.
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆105 · Updated last month
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆89 · Updated last year
- A Sober Look at Language Model Reasoning ☆87 · Updated this week
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆132 · Updated 8 months ago
- ☆53 · Updated 9 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆64 · Updated 9 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 9 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆83 · Updated 7 months ago
- Directional Preference Alignment ☆57 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆116 · Updated 6 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆47 · Updated 6 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆62 · Updated 11 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆82 · Updated 10 months ago
- Unofficial implementation of "Chain-of-Thought Reasoning Without Prompting" ☆33 · Updated last year
- [COLM 2025] Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆178 · Updated 4 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated 5 months ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- Exploration of automated dataset selection approaches at large scales ☆48 · Updated 8 months ago
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆76 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 8 months ago
- Benchmarking Benchmark Leakage in Large Language Models ☆56 · Updated last year
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆29 · Updated last year
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆148 · Updated 4 months ago
- [EMNLP 2024] Code and models for the paper "WPO: Enhancing RLHF with Weighted Preference Optimization" ☆41 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated last month
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆86 · Updated 6 months ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆44 · Updated 7 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year