ZJU-REAL / Self-Braking-Tuning
[NeurIPS 2025] Code for Let LLMs Break Free from Overthinking via Self-Braking Tuning. https://arxiv.org/abs/2505.14604
☆49 · Updated last month
Alternatives and similar repositories for Self-Braking-Tuning
Users interested in Self-Braking-Tuning are comparing it to the libraries listed below.
- ☆36 · Updated 3 weeks ago
- A Unified Framework for High-Performance and Extensible LLM Steering ☆89 · Updated last week
- [NeurIPS 2025] Mind the Gap: Bridging Thought Leap for Improved CoT Tuning https://arxiv.org/abs/2505.14684 ☆42 · Updated last week
- Official implementation of TimeHC-RL (Distilabel for data generation, TRL for SFT, VeRL for GRPO) ☆48 · Updated 4 months ago
- ☆29 · Updated 2 months ago
- R1-Searcher++: Incentivizing the Dynamic Knowledge Acquisition of LLMs via Reinforcement Learning ☆65 · Updated 5 months ago
- Code for the paper "InftyThink: Breaking the Length Limits of Long-Context Reasoning in Large Language Models" ☆37 · Updated 3 months ago
- ☆45 · Updated 3 weeks ago
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework ☆71 · Updated 4 months ago
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆47 · Updated last month
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning" ☆36 · Updated 9 months ago
- ☆38 · Updated 2 months ago
- ☆63 · Updated 4 months ago
- [NAACL 2025] Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs ☆24 · Updated last year
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆77 · Updated last month
- Official implementation of the paper "THOR: Tool-Integrated Hierarchical Optimization via RL for Mathematical Reasoning" ☆27 · Updated last month
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains ☆49 · Updated 5 months ago
- [EMNLP 2025] Distill Visual Chart Reasoning Ability from LLMs to MLLMs ☆56 · Updated 2 months ago
- [NeurIPS'25 Spotlight] ARM: Adaptive Reasoning Model ☆56 · Updated 3 weeks ago
- ☆23 · Updated last year
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆30 · Updated 2 months ago
- [EMNLP 2024 Findings] ProSA: Assessing and Understanding the Prompt Sensitivity of LLMs ☆29 · Updated 5 months ago
- GSM8K-V: Can Vision Language Models Solve Grade School Math Word Problems in Visual Contexts ☆34 · Updated last month
- ☆46 · Updated 8 months ago
- Official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions" ☆15 · Updated last month
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated 10 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆42 · Updated 8 months ago
- ☆24 · Updated 2 months ago
- [arXiv:2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆46 · Updated 3 months ago
- Source code for the paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆24 · Updated 2 months ago