WANGXinyiLinda / planning_tokens
Official code for Guiding Language Model Math Reasoning with Planning Tokens
☆15 · Updated last year
Alternatives and similar repositories for planning_tokens
Users interested in planning_tokens are comparing it to the repositories listed below.
- ☆126 · Updated 2 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆81 · Updated 5 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆64 · Updated 3 weeks ago
- A Sober Look at Language Model Reasoning ☆81 · Updated last month
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆39 · Updated 4 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆52 · Updated 5 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆28 · Updated 8 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆77 · Updated 2 months ago
- Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆83 · Updated 3 weeks ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- ☆117 · Updated 4 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆86 · Updated 5 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆162 · Updated this week
- Repository for the project "Fine-tuning Large Language Models with Sequential Instructions"; the code base comes from open-instruct and LA… ☆29 · Updated 8 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆22 · Updated 7 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆105 · Updated 3 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆114 · Updated last year
- ☆49 · Updated 3 weeks ago
- ☆59 · Updated 11 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆52 · Updated 2 months ago
- ☆18 · Updated last week
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆127 · Updated 4 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆125 · Updated 9 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆49 · Updated 9 months ago
- [NeurIPS 2024] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆111 · Updated 7 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆76 · Updated 4 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆63 · Updated 9 months ago
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆58 · Updated 3 weeks ago
- ☆39 · Updated 3 months ago
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆58 · Updated 8 months ago