UCSB-NLP-Chang / ThinkPrune
☆33 · Updated 2 months ago
Alternatives and similar repositories for ThinkPrune
Users interested in ThinkPrune are comparing it to the repositories listed below.
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆47 · Updated last month
- The rule-based evaluation subset and code implementation of Omni-MATH ☆22 · Updated 6 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆88 · Updated last month
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆60 · Updated 6 months ago
- The repository of the project "Fine-tuning Large Language Models with Sequential Instructions"; the code base comes from open-instruct and LA… ☆29 · Updated 7 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆45 · Updated 7 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆37 · Updated last week
- [ACL 2025] SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs, and the preprint SoftCoT++: Test-Time Scaling with Soft Chain-of… ☆28 · Updated 3 weeks ago
- An instruction-following benchmark for large reasoning models ☆33 · Updated 3 weeks ago
- The official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆32 · Updated 9 months ago
- ☆85 · Updated 2 months ago
- The official implementation of Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆44 · Updated last month
- ☆13 · Updated 11 months ago
- Revisiting Mid-training in the Era of RL Scaling ☆56 · Updated 2 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 4 months ago
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- Model merging is a highly efficient approach for long-to-short reasoning. ☆65 · Updated 3 weeks ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 6 months ago
- ☆15 · Updated 6 months ago
- LightThinker: Thinking Step-by-Step Compression ☆59 · Updated 2 months ago
- [ICLR 2025 Spotlight] When Attention Sink Emerges in Language Models: An Empirical View ☆88 · Updated 8 months ago
- ☆46 · Updated 2 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆73 · Updated 4 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 2 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆63 · Updated 7 months ago
- ☆29 · Updated last year
- ☆65 · Updated 2 months ago
- ☆59 · Updated 9 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆156 · Updated 3 months ago
- ☆14 · Updated 8 months ago