WANGXinyiLinda / planning_tokens
Official code for Guiding Language Model Math Reasoning with Planning Tokens
☆15 · Updated last year
Alternatives and similar repositories for planning_tokens
Users interested in planning_tokens are comparing it to the repositories listed below.
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆42 · Updated 6 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆23 · Updated 9 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆120 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆86 · Updated 8 months ago
- A Sober Look at Language Model Reasoning ☆84 · Updated this week
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆28 · Updated 3 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 3 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; code base comes from open-instruct and LA… ☆29 · Updated 10 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆115 · Updated 10 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆112 · Updated last month
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆51 · Updated 11 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆82 · Updated 6 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. ☆86 · Updated 4 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆114 · Updated 5 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆129 · Updated 6 months ago
- The official implementation for [NeurIPS 2025 Oral] Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink… ☆89 · Updated 3 weeks ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆82 · Updated 9 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 4 months ago
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style