abdelfattah-lab / SplitReason
☆21 · Updated 2 months ago
Alternatives and similar repositories for SplitReason
Users interested in SplitReason are comparing it to the repositories listed below.
- [ICML'25] Official code of the paper "Fast Large Language Model Collaborative Decoding via Speculation" ☆28 · Updated 7 months ago
- [AAAI 2026] Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning" ☆95 · Updated 3 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆132 · Updated 9 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ☆134 · Updated last week
- ☆46 · Updated 4 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆91 · Updated 11 months ago
- ☆43 · Updated 5 months ago
- [ICLR 2026] PSFT is a trust-region–inspired fine-tuning objective that views SFT as a policy gradient method with constant advantages, co… ☆34 · Updated 5 months ago
- ☆47 · Updated 4 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆55 · Updated last year
- ☆178 · Updated 2 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆87 · Updated 10 months ago
- ☆59 · Updated 3 weeks ago
- ☆21 · Updated last year
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆81 · Updated last month
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆55 · Updated last year
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA ☆84 · Updated 3 months ago
- Model merging is a highly efficient approach for long-to-short reasoning ☆98 · Updated 3 months ago
- ☆145 · Updated 4 months ago
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆62 · Updated 7 months ago
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ☆32 · Updated 8 months ago
- ☆72 · Updated 7 months ago
- ☆27 · Updated last year
- Source code for our paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆25 · Updated 6 months ago
- The official repository of the paper "Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models" ☆110 · Updated 5 months ago
- ☆15 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆83 · Updated last year
- ☆134 · Updated 2 weeks ago
- Official code for the paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆62 · Updated 4 months ago