[NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3)
☆ 217 · Feb 11, 2026 · Updated 2 months ago
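KVzip evicts KV cache entries without knowing the future query. As a loose illustration of the general idea only (not KVzip's actual scoring method, which its paper defines), here is a minimal NumPy sketch of score-based eviction; `evict_kv_cache`, `keep_ratio`, and the toy tensors are all hypothetical names for this example:

```python
import numpy as np

def evict_kv_cache(keys, values, attn_weights, keep_ratio=0.25):
    """Drop low-importance KV entries using aggregate attention mass.

    keys, values: (seq_len, head_dim) arrays for one head.
    attn_weights: (num_queries, seq_len) attention probabilities
                  observed during prefill.
    Returns compressed keys/values and the kept position indices.
    """
    # Score each cached position by the total attention it received.
    scores = attn_weights.sum(axis=0)               # (seq_len,)
    keep = max(1, int(len(scores) * keep_ratio))    # eviction budget
    kept_idx = np.sort(np.argsort(scores)[-keep:])  # top-k, original order
    return keys[kept_idx], values[kept_idx], kept_idx

# Toy example: 8 cached positions, 4-dim heads, 3 observed queries.
rng = np.random.default_rng(0)
K = rng.standard_normal((8, 4))
V = rng.standard_normal((8, 4))
A = rng.random((3, 8))
A /= A.sum(axis=1, keepdims=True)  # normalize each query's attention row

K2, V2, idx = evict_kv_cache(K, V, A, keep_ratio=0.5)
print(K2.shape, idx)  # half the cache retained
```

With `keep_ratio=0.5` the cache shrinks from 8 entries to 4, which is the memory-reduction mechanism these repositories share; they differ mainly in how the importance scores are computed.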
Alternatives and similar repositories for KVzip
Users interested in KVzip are also comparing it to the repositories listed below.
- [NAACL 2025🔥] MEDA: Dynamic KV Cache Allocation for Efficient Multimodal Long-Context Inference ☆ 20 · Jun 19, 2025 · Updated 10 months ago
- ☆ 41 · Oct 16, 2025 · Updated 6 months ago
- [NeurIPS 2025 Spotlight] Implementation of "KLASS: KL-Guided Fast Inference in Masked Diffusion Models" ☆ 31 · Jan 3, 2026 · Updated 3 months ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆ 694 · Apr 15, 2026 · Updated 2 weeks ago
- ☆ 311 · Jul 10, 2025 · Updated 9 months ago
- ☆ 16 · Sep 11, 2025 · Updated 7 months ago
- Residual Context Diffusion (RCD): Repurposing discarded signals as structured priors for high-performance reasoning in dLLMs. ☆ 56 · Mar 12, 2026 · Updated last month
- The evaluation framework for training-free sparse attention in LLMs ☆ 122 · Jan 27, 2026 · Updated 3 months ago
- Official implementation of the paper "Jointly Reinforcing Diversity and Quality in Language Model Generations" ☆ 58 · Apr 13, 2026 · Updated 2 weeks ago
- KV cache compression for high-throughput LLM inference ☆ 156 · Feb 5, 2025 · Updated last year
- A lightweight chat interface for interacting with local models, featuring persistent memory using a seamless SQLite database to store you… ☆ 34 · Sep 15, 2025 · Updated 7 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆ 50 · Oct 18, 2024 · Updated last year
- IndexCache: Accelerating Sparse Attention via Cross-Layer Index Reuse ☆ 91 · Mar 14, 2026 · Updated last month
- ☆ 53 · May 13, 2024 · Updated last year
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆ 104 · Nov 9, 2024 · Updated last year
- Measuring Thinking Efficiency in Reasoning Models - Research Repository ☆ 39 · Dec 2, 2025 · Updated 4 months ago
- A real-time speech-to-text diarization system that gathers and interleaves speech from multi-speaker audio. ☆ 28 · Jan 29, 2026 · Updated 3 months ago
- Awesome AI Benchmarks ☆ 30 · Jan 16, 2026 · Updated 3 months ago
- ☆ 12 · Apr 4, 2024 · Updated 2 years ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆ 540 · Feb 10, 2025 · Updated last year
- Dynamic Context Selection for Efficient Long-Context LLMs ☆ 56 · May 20, 2025 · Updated 11 months ago
- ☆ 21 · Jun 1, 2025 · Updated 10 months ago
- Official implementation of "GRIFFIN: Effective Token Alignment for Faster Speculative Decoding" [NeurIPS 2025] ☆ 18 · May 12, 2025 · Updated 11 months ago
- [ICCV 2025] Multi-Granular Spatio-Temporal Token Merging for Training-Free Acceleration of Video LLMs ☆ 59 · Feb 2, 2026 · Updated 2 months ago
- Various LLM Benchmarks ☆ 25 · Feb 20, 2026 · Updated 2 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆ 45 · Apr 18, 2025 · Updated last year
- Official PyTorch implementation of "LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging" (ICML 2024) ☆ 31 · Apr 13, 2026 · Updated 2 weeks ago
- Collection of papers about video-audio understanding ☆ 25 · Dec 26, 2025 · Updated 4 months ago
- (ACL 2025 Oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆ 35 · May 28, 2025 · Updated 11 months ago
- LLM KV cache compression made easy ☆ 1,055 · Apr 23, 2026 · Updated last week
- ☆ 17 · Aug 5, 2025 · Updated 8 months ago
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs ☆ 85 · Jan 17, 2026 · Updated 3 months ago
- [ECCV 2024] Efficient Inference of Vision Instruction-Following Models with Elastic Cache ☆ 43 · Jul 26, 2024 · Updated last year
- [CVPR 2026] Variation-aware Vision Token Dropping for Faster Large Vision-Language Models ☆ 31 · Mar 18, 2026 · Updated last month
- LLM-Powered Data Discovery System for Tabular Data ☆ 28 · Apr 7, 2026 · Updated 3 weeks ago
- ☆ 37 · Dec 30, 2025 · Updated 4 months ago
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆ 296 · May 1, 2025 · Updated 11 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆ 144 · Mar 6, 2025 · Updated last year
- ☆ 245 · Nov 19, 2025 · Updated 5 months ago