LLM KV cache compression made easy
★1,021 · Apr 9, 2026 · Updated this week
Alternatives and similar repositories for kvpress
Users who are interested in kvpress are comparing it to the libraries listed below.
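kvpress itself packages compression methods as interchangeable "presses" that plug into a Hugging Face transformers pipeline. Below is a minimal sketch of that workflow, assuming the press/pipeline API from the kvpress README (the `ExpectedAttentionPress` class, the `"kv-press-text-generation"` task, and the `compression_ratio` argument); the model name and the 0.5 ratio are placeholder choices, not recommendations:

```python
from transformers import pipeline

from kvpress import ExpectedAttentionPress  # one of several available presses

# kvpress registers a custom pipeline task that compresses the context's
# KV cache during prefill, then answers questions against the smaller cache.
pipe = pipeline(
    "kv-press-text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model choice
    device="cuda",
    torch_dtype="auto",
)

context = "..."   # long document whose KV cache should be compressed
question = "..."  # query answered against the compressed cache

# A press decides which key-value pairs to evict; compression_ratio=0.5
# asks it to drop half of the context's KV pairs.
press = ExpectedAttentionPress(compression_ratio=0.5)
answer = pipe(context, question=question, press=press)["answer"]
```

Swapping methods then amounts to swapping the press object, which is what makes side-by-side comparison with the repositories below straightforward.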
- Must-read papers on KV Cache Compression (constantly updating). ★679 · Feb 24, 2026 · Updated last month
- ★310 · Jul 10, 2025 · Updated 9 months ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (see the quantization sketch after this list). ★381 · Nov 20, 2025 · Updated 4 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, uses approximate and dynamic sparse calculation of the attention… ★1,202 · Mar 9, 2026 · Updated last month
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference. ★291 · May 1, 2025 · Updated 11 months ago
- The Official Implementation of Ada-KV [NeurIPS 2025]. ★131 · Nov 26, 2025 · Updated 4 months ago
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation. ★251 · Dec 16, 2024 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ★822 · Mar 6, 2025 · Updated last year
- FlashInfer: Kernel Library for LLM Serving. ★5,273 · Apr 4, 2026 · Updated last week
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads. ★535 · Feb 10, 2025 · Updated last year
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference. ★380 · Jul 10, 2025 · Updated 9 months ago
- KV cache compression for high-throughput LLM inference. ★155 · Feb 5, 2025 · Updated last year
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models (see the heavy-hitter eviction sketch after this list). ★510 · Aug 1, 2024 · Updated last year
- Awesome-LLM-KV-Cache: A curated list of Awesome LLM KV Cache Papers with Codes. ★426 · Mar 3, 2025 · Updated last year
- ★47 · Nov 25, 2024 · Updated last year
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding. ★279 · Aug 31, 2024 · Updated last year
- Code for the EMNLP24 paper "A simple and effective L2 norm based method for KV Cache compression" (see the norm-based sketch after this list). ★18 · Dec 13, 2024 · Updated last year
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ★149 · Aug 9, 2024 · Updated last year
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring. ★276 · Jul 6, 2025 · Updated 9 months ago
- Code for the paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference. ★168 · Oct 13, 2025 · Updated 5 months ago
- Efficient LLM Inference over Long Sequences. ★393 · Jun 25, 2025 · Updated 9 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ★1,048 · Sep 4, 2024 · Updated last year
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs. ★390 · Apr 13, 2025 · Updated 11 months ago
- Unified KV Cache Compression Methods for Auto-Regressive Models. ★1,319 · Jan 4, 2025 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters. ★132 · Dec 3, 2024 · Updated last year
- Supercharge Your LLM with the Fastest KV Cache Layer. ★7,900 · Updated this week
- Helpful tools and examples for working with flex-attention. ★1,167 · Apr 1, 2026 · Updated last week
- Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention". ★982 · Feb 5, 2026 · Updated 2 months ago
- Distributed Compiler based on Triton for Parallel Systems. ★1,401 · Mar 11, 2026 · Updated 3 weeks ago
- [ICLR 2025] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models. ★27 · Jul 7, 2025 · Updated 9 months ago
- The official implementation of the paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ★50 · Oct 18, 2024 · Updated last year
- ★65 · Apr 26, 2025 · Updated 11 months ago
- Fast low-bit matmul kernels in Triton. ★443 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs. ★953 · Mar 29, 2026 · Updated last week
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation. ★34 · May 28, 2025 · Updated 10 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ★3,256 · Apr 3, 2026 · Updated last week
- Awesome LLM compression research papers and tools. ★1,796 · Feb 23, 2026 · Updated last month
- Efficient implementations for emerging model architectures. ★4,823 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels. ★5,478 · Updated this week
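On the quantization sketch promised in the KIVI entry above: KIVI's core observation is that keys should be quantized per channel (outliers cluster in a few key channels) while values are quantized per token. The toy asymmetric 2-bit quantizer below illustrates only that axis choice; the group-wise packing, the full-precision residual window of recent tokens, and the fused kernels of the real implementation are omitted, and all shapes here are made up.

```python
import torch

def asym_quantize(x: torch.Tensor, n_bits: int, dim: int):
    """Toy asymmetric uniform quantizer along one axis.

    Returns integer codes plus the scale/offset needed to dequantize.
    """
    qmax = 2**n_bits - 1
    xmin = x.amin(dim=dim, keepdim=True)
    xmax = x.amax(dim=dim, keepdim=True)
    scale = (xmax - xmin).clamp(min=1e-8) / qmax
    codes = ((x - xmin) / scale).round().clamp(0, qmax).to(torch.uint8)
    return codes, scale, xmin

def dequantize(codes, scale, xmin):
    return codes.to(scale.dtype) * scale + xmin

# Fake KV tensors shaped (batch, heads, seq_len, head_dim).
k = torch.randn(1, 8, 1024, 128)
v = torch.randn(1, 8, 1024, 128)

# KIVI-style axis choice: keys per channel (statistics taken over the
# sequence axis), values per token (statistics over the channel axis).
k_codes, k_scale, k_min = asym_quantize(k, n_bits=2, dim=-2)
v_codes, v_scale, v_min = asym_quantize(v, n_bits=2, dim=-1)

# Dequantize on the fly at attention time.
k_hat = dequantize(k_codes, k_scale, k_min)
```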
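And on the eviction sketch referenced in the H2O entry: heavy-hitter methods score each cached position by the attention mass it has accumulated and keep only the top-scoring positions plus a recency window. The per-head selection step below is a simplification that scores a single decode step; the actual method accumulates scores across steps, and the tensor shapes and budget values here are assumptions.

```python
import torch

def heavy_hitter_keep_indices(
    attn_weights: torch.Tensor,  # (heads, q_len, kv_len), post-softmax
    budget: int,                 # KV positions to keep per head
    recent: int,                 # most recent positions, never evicted
) -> torch.Tensor:
    """Choose which KV positions to retain, heavy-hitter style (simplified)."""
    kv_len = attn_weights.shape[-1]
    # Heavy-hitter score: total attention mass each key position received.
    scores = attn_weights.sum(dim=1)  # (heads, kv_len)
    # Protect the recency window by making it impossible to evict.
    scores[:, kv_len - recent:] = float("inf")
    keep = scores.topk(budget, dim=-1).indices  # (heads, budget)
    return keep.sort(dim=-1).values  # restore original token order

# Fake single-step decode attention over a 4096-token cache, 8 heads.
attn = torch.rand(8, 1, 4096).softmax(dim=-1)
idx = heavy_hitter_keep_indices(attn, budget=512, recent=128)
# Keys/values are then gathered per head with these indices
# (e.g. via torch.gather) to form the compressed cache.
```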
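Finally, the norm-based sketch for the EMNLP24 entry: that paper observes that key vectors with a low L2 norm tend to receive high attention, so the cache can be compressed by keeping only the lowest-norm keys (and their values). A sketch under that reading; the shapes and keep ratio are assumptions.

```python
import torch

def l2_keep_indices(k: torch.Tensor, keep: int) -> torch.Tensor:
    """k: (heads, seq_len, head_dim). Keep the positions whose keys have
    the smallest L2 norm, per head, preserving original token order."""
    norms = k.norm(dim=-1)  # (heads, seq_len)
    idx = norms.topk(keep, dim=-1, largest=False).indices
    return idx.sort(dim=-1).values

k = torch.randn(8, 4096, 128)
idx = l2_keep_indices(k, keep=1024)  # retain 25% of positions
```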