[NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3)
☆209 · Feb 11, 2026 · Updated last month
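For context on what score-based KV cache eviction looks like in practice, here is a minimal, generic sketch: it keeps only the highest-scoring fraction of cached entries and evicts the rest. The `scores` input and `keep_ratio` parameter are illustrative assumptions — this is *not* KVzip's query-agnostic criterion, just the general shape of the cache-pruning technique that KVzip and several repositories below address.

```python
def evict_kv(keys, values, scores, keep_ratio=0.25):
    """Generic top-k KV cache eviction sketch (NOT KVzip's method).

    keys/values: per-position cached entries; scores: an importance
    estimate per position (e.g. accumulated attention mass -- here an
    assumed, illustrative input). Keeps the top keep_ratio fraction,
    preserving the original sequence order of the survivors.
    """
    k = max(1, int(len(keys) * keep_ratio))
    # Indices of the k highest-scoring positions, restored to cache order.
    keep = sorted(sorted(range(len(scores)), key=scores.__getitem__)[-k:])
    return [keys[i] for i in keep], [values[i] for i in keep]

# Toy usage: 8 cached positions, keep 25% -> the 2 highest-scoring survive.
scores = [0.9, 0.1, 0.2, 0.8, 0.05, 0.3, 0.4, 0.15]
keys = [f"k{i}" for i in range(8)]
values = [f"v{i}" for i in range(8)]
K2, V2 = evict_kv(keys, values, scores, keep_ratio=0.25)
print(K2, V2)  # ['k0', 'k3'] ['v0', 'v3']
```

Real eviction policies differ mainly in how `scores` is computed (and KVzip's distinguishing claim is that its scoring does not depend on the future query), but the keep/evict mechanics follow this pattern.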
Alternatives and similar repositories for KVzip
Users interested in KVzip are comparing it to the libraries listed below.
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆128 · Nov 26, 2025 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] Implementation of "KLASS: KL-Guided Fast Inference in Masked Diffusion Models" ☆25 · Jan 3, 2026 · Updated 2 months ago
- ☆38 · Oct 16, 2025 · Updated 5 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆668 · Feb 24, 2026 · Updated 3 weeks ago
- Residual Context Diffusion (RCD): Repurposing discarded signals as structured priors for high-performance reasoning in dLLMs. ☆57 · Mar 12, 2026 · Updated last week
- ☆14 · Sep 11, 2025 · Updated 6 months ago
- [NAACL 2025🔥] MEDA: Dynamic KV Cache Allocation for Efficient Multimodal Long-Context Inference ☆18 · Jun 19, 2025 · Updated 9 months ago
- KV cache compression for high-throughput LLM inference ☆152 · Feb 5, 2025 · Updated last year
- ☆306 · Jul 10, 2025 · Updated 8 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆51 · Oct 18, 2024 · Updated last year
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆55 · Oct 9, 2025 · Updated 5 months ago
- ☆52 · May 13, 2024 · Updated last year
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆104 · Nov 9, 2024 · Updated last year
- Measuring Thinking Efficiency in Reasoning Models - Research Repository ☆39 · Dec 2, 2025 · Updated 3 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆531 · Feb 10, 2025 · Updated last year
- ☆12 · Apr 4, 2024 · Updated last year
- ☆20 · Jun 1, 2025 · Updated 9 months ago
- Dynamic Context Selection for Efficient Long-Context LLMs ☆56 · May 20, 2025 · Updated 10 months ago
- Official Implementation of "GRIFFIN: Effective Token Alignment for Faster Speculative Decoding" [NeurIPS 2025] ☆18 · May 12, 2025 · Updated 10 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆45 · Apr 18, 2025 · Updated 11 months ago
- Official PyTorch implementation of "LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging" (ICML 2024) ☆31 · Aug 15, 2024 · Updated last year
- LLM KV cache compression made easy ☆957 · Mar 13, 2026 · Updated last week
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆34 · May 28, 2025 · Updated 9 months ago
- Official Implementation for [ICLR 2026] DefensiveKV: Taming the Fragility of KV Cache Eviction in LLM Inference ☆30 · Updated this week
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆39 · Jan 27, 2026 · Updated last month
- ☆17 · Aug 5, 2025 · Updated 7 months ago
- A simple, "Ollama-like" tool for managing and running GGUF language models from your terminal. ☆23 · Jan 2, 2026 · Updated 2 months ago
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs ☆82 · Jan 17, 2026 · Updated 2 months ago
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ☆43 · Jul 26, 2024 · Updated last year
- LLM-Powered Data Discovery System for Tabular Data ☆24 · Jul 14, 2025 · Updated 8 months ago
- ☆229 · Nov 19, 2025 · Updated 4 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆283 · May 1, 2025 · Updated 10 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆142 · Mar 6, 2025 · Updated last year
- ☆39 · Oct 11, 2025 · Updated 5 months ago
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… ☆21 · Mar 7, 2024 · Updated 2 years ago
- ☆10 · Aug 22, 2023 · Updated 2 years ago
- Oobabooga Text-Gen Web UI extension: get web content, add to context ☆23 · Jun 1, 2024 · Updated last year
- [ICLR 2025🔥] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models ☆27 · Jul 7, 2025 · Updated 8 months ago
- ☆47 · Nov 8, 2024 · Updated last year