ydyhello / TailorKV
Official implementation of "TailorKV: A Hybrid Framework for Long-Context Inference via Tailored KV Cache Optimization" (Findings of ACL 2025).
☆18 Updated 2 weeks ago
Alternatives and similar repositories for TailorKV
Users interested in TailorKV are comparing it to the libraries listed below
- ☆10 Updated 11 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆107 Updated last month
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆90 Updated last month
- ☆54 Updated last month
- ☆48 Updated last month
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More ☆33 Updated 2 months ago
- Official PyTorch implementation of our paper accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆49 Updated last year
- PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ☆42 Updated last year
- ICLR 2025 ☆27 Updated 2 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆52 Updated 5 months ago
- Self-reproduced code for the paper "Reducing Transformer Key-Value Cache Size with Cross-Layer Attention" (MIT CSAIL) ☆17 Updated last year
- Open-Pandora: On-the-fly Control Video Generation ☆34 Updated 8 months ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆70 Updated last month
- [ICML 2024] "LoCoCo: Dropping In Convolutions for Long Context Compression", Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen ☆17 Updated 11 months ago
- ☆31 Updated 3 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆48 Updated 6 months ago
- ☆50 Updated last year
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆32 Updated 2 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆91 Updated 8 months ago
- This repo contains the source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆38 Updated 11 months ago
- A family of efficient edge language models in 100M~1B sizes. ☆15 Updated 5 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆25 Updated 9 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆97 Updated last year
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆77 Updated last month
- JudgeLRM: Large Reasoning Models as a Judge ☆32 Updated 3 months ago
- Code for the EMNLP24 paper "A simple and effective L2 norm based method for KV Cache compression." ☆16 Updated 8 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆82 Updated 6 months ago
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆21 Updated last year
- [NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification ☆23 Updated 4 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆33 Updated last year