EIT-NLP / Distilling-CoT-Reasoning
[ACL 2025 Findings] Official implementation of the paper "Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning".
☆19 · Updated 8 months ago
Alternatives and similar repositories for Distilling-CoT-Reasoning
Users interested in Distilling-CoT-Reasoning are comparing it to the repositories listed below.
- ☆50 · Updated last year
- [arXiv: 2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆46 · Updated 4 months ago
- Source code for our paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆24 · Updated 2 months ago
- ☆23 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 10 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning ☆23 · Updated 3 weeks ago
- ☆41 · Updated last month
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) project: diving into self-evolving training for multimodal reasoning ☆69 · Updated 3 months ago
- MathFusion: Enhancing Mathematical Problem-solving of LLM through Instruction Fusion (ACL 2025) ☆31 · Updated 3 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆115 · Updated 6 months ago
- ☆21 · Updated 5 months ago
- ☆29 · Updated last month
- [EMNLP 2025] WebAgent-R1: Training Web Agents via End-to-End Multi-Turn Reinforcement Learning ☆49 · Updated last week
- A Sober Look at Language Model Reasoning ☆86 · Updated 3 weeks ago
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆69 · Updated 6 months ago
- ☆17 · Updated 3 months ago
- Official implementation of the paper "Integrative Decoding: Improving Factuality via Implicit Self-consistency" ☆31 · Updated 6 months ago
- [ACL 2024] Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models ☆26 · Updated last year
- Official repository of the paper "Context-DPO: Aligning Language Models for Context-Faithfulness" ☆19 · Updated 8 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆55 · Updated 5 months ago
- ☆38 · Updated 2 months ago
- [ICML 2025] Official code of "AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization" ☆22 · Updated last year
- ☆14 · Updated 10 months ago
- Official repository of LatentSeek ☆65 · Updated 4 months ago
- Official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions" ☆15 · Updated last month
- ☆15 · Updated 4 months ago
- Official code for the paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆47 · Updated last month
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA ☆78 · Updated last month
- ☆45 · Updated last month