jiwonsong-dev / ReasoningPathCompression
[NeurIPS 2025] Official implementation of "Reasoning Path Compression: Compressing Generation Trajectories for Efficient LLM Reasoning"
☆28 · Updated 2 months ago
Alternatives and similar repositories for ReasoningPathCompression
Users interested in ReasoningPathCompression are comparing it to the repositories listed below.
- JudgeLRM: Large Reasoning Models as a Judge ☆40 · Updated last month
- Code for the paper "Long cOntext aliGnment via efficient preference Optimization" ☆23 · Updated 3 months ago
- ☆19 · Updated last year
- ☆72 · Updated 6 months ago
- Official Implementation of FastKV: Decoupling of Context Reduction and KV Cache Compression for Prefill-Decoding Acceleration ☆29 · Updated last month
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression." ☆18 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆43 · Updated 10 months ago
- ☆50 · Updated 11 months ago
- dParallel: Learnable Parallel Decoding for dLLMs ☆53 · Updated 3 months ago
- ☆17 · Updated 5 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆41 · Updated 2 weeks ago
- Official implementation of the paper "Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models" ☆58 · Updated 3 weeks ago
- The official code repository for the paper "Mirage or Method? How Model–Task Alignment Induces Divergent RL Conclusions" ☆15 · Updated 4 months ago
- ☆61 · Updated 7 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆61 · Updated 10 months ago
- The official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆55 · Updated last year
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆33 · Updated 7 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆29 · Updated last year
- ☆11 · Updated last year
- ☆62 · Updated 6 months ago
- ☆109 · Updated 4 months ago
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models ☆17 · Updated 2 months ago
- ☆19 · Updated 10 months ago
- ☆85 · Updated 2 months ago
- ☆15 · Updated last year
- PyTorch implementation of StableMask (ICML'24) ☆15 · Updated last year
- ☆55 · Updated 3 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆55 · Updated last week
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆113 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆89 · Updated 11 months ago