LLM KV cache compression made easy
⭐1,055 · updated Apr 23, 2026
Alternatives and similar repositories for kvpress
Users interested in kvpress are comparing it to the libraries listed below.
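For orientation before the list, here is a minimal sketch of how kvpress itself is typically driven, following the pipeline-based usage pattern from its README: a "press" object implements a KV compression policy and is passed to a Hugging Face pipeline. The class and task names (`ExpectedAttentionPress`, `"kv-press-text-generation"`) follow the documented pattern but may differ between versions.

```python
# Minimal kvpress usage sketch, assuming the pipeline-based API shown in
# the project README; names may vary across kvpress versions.
from transformers import pipeline

from kvpress import ExpectedAttentionPress  # importing kvpress registers the pipeline

pipe = pipeline(
    "kv-press-text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # any supported decoder model
    device="cuda",
)

# Drop roughly half of the KV cache during pre-filling, with entries
# scored by their expected attention.
press = ExpectedAttentionPress(compression_ratio=0.5)

context = "..."   # long document whose KV cache gets compressed
question = "..."  # query answered against the compressed cache
answer = pipe(context, question=question, press=press)["answer"]
print(answer)
```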
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). (⭐694 · updated Apr 15, 2026)
- (no description) (⭐311 · updated Jul 10, 2025)
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (sketched after this list) (⭐387 · updated Nov 20, 2025)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse calculation of the attention… (⭐1,207 · updated Apr 8, 2026)
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference (⭐296 · updated May 1, 2025)
- The Official Implementation of Ada-KV [NeurIPS 2025] (⭐132 · updated Nov 26, 2025)
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation (⭐253 · updated Dec 16, 2024)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (⭐834 · updated Mar 6, 2025)
- FlashInfer: Kernel Library for LLM Serving (⭐5,498 · updated this week)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (⭐540 · updated Feb 10, 2025)
- KV cache compression for high-throughput LLM inference (⭐156 · updated Feb 5, 2025)
- [NeurIPS'23] H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models (sketched after this list) (⭐513 · updated Aug 1, 2024)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (sketched after this list) (⭐381 · updated Jul 10, 2025)
- Awesome-LLM-KV-Cache: A curated list of Awesome LLM KV Cache Papers with Codes. (⭐429 · updated Mar 3, 2025)
- (no description) (⭐47 · updated Nov 25, 2024)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (⭐279 · updated Aug 31, 2024)
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression" (sketched after this list) (⭐18 · updated Dec 13, 2024)
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… (⭐149 · updated Aug 9, 2024)
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring (⭐277 · updated Jul 6, 2025)
- Code for the paper [ICLR 2025 Oral] "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" (⭐168 · updated Oct 13, 2025)
- Efficient LLM Inference over Long Sequences (⭐394 · updated Jun 25, 2025)
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens (⭐1,061 · updated Sep 4, 2024)
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs (⭐391 · updated Apr 13, 2025)
- Unified KV Cache Compression Methods for Auto-Regressive Models (⭐1,328 · updated Jan 4, 2025)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters (⭐133 · updated Dec 3, 2024)
- Supercharge Your LLM with the Fastest KV Cache Layer (⭐8,132 · updated this week)
- Helpful tools and examples for working with flex-attention (⭐1,179 · updated Apr 13, 2026)
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (⭐993 · updated Feb 5, 2026)
- Distributed Compiler based on Triton for Parallel Systems (⭐1,414 · updated Apr 22, 2026)
- [ICLR 2025 🔥] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models (⭐27 · updated Jul 7, 2025)
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" (⭐50 · updated Oct 18, 2024)
- (no description) (⭐66 · updated Apr 26, 2025)
- Fast low-bit matmul kernels in Triton (⭐445 · updated Apr 24, 2026)
- A throughput-oriented high-performance serving framework for LLMs (⭐954 · updated Mar 29, 2026)
- (ACL 2025 Oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation (⭐35 · updated May 28, 2025)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (⭐3,291 · updated this week)
- Awesome LLM compression research papers and tools. (⭐1,824 · updated Feb 23, 2026)
- 🚀 Efficient implementations for emerging model architectures (⭐4,999 · updated this week)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels (⭐5,928 · updated this week)
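For the entries flagged above, a few hedged sketches of the underlying techniques follow. First, KIVI's asymmetric 2-bit quantization: the paper's observation is that key caches have outlier channels while value caches do not, so keys are quantized per-channel and values per-token. The sketch below uses a generic uniform quantizer with illustrative shapes; it is not KIVI's actual fused kernels, which additionally keep a small full-precision residual window.

```python
import torch

def quantize(x: torch.Tensor, dim: int, bits: int = 2):
    """Uniform asymmetric quantization with min/max statistics taken
    along dimension `dim`; returns codes plus dequantization params."""
    qmax = 2 ** bits - 1
    xmin = x.amin(dim=dim, keepdim=True)
    xmax = x.amax(dim=dim, keepdim=True)
    scale = (xmax - xmin).clamp(min=1e-8) / qmax
    codes = ((x - xmin) / scale).round().clamp(0, qmax)
    return codes, scale, xmin

def dequantize(codes, scale, xmin):
    return codes * scale + xmin

# (seq_len, head_dim) caches for a single head; shapes are illustrative.
keys = torch.randn(1024, 128)
values = torch.randn(1024, 128)

# KIVI's asymmetry: keys per-channel (stats over tokens, dim=0),
# values per-token (stats over channels, dim=1).
k_codes, k_scale, k_min = quantize(keys, dim=0)
v_codes, v_scale, v_min = quantize(values, dim=1)

err = (keys - dequantize(k_codes, k_scale, k_min)).abs().mean()
print(f"mean abs key reconstruction error: {err:.4f}")
```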
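H2O's heavy-hitter eviction keeps a small window of recent tokens plus the tokens that have accumulated the most attention mass, and evicts the rest. A minimal sketch of just the selection rule, applied to a stand-in attention map (H2O itself updates the scores incrementally during decoding):

```python
import torch

def h2o_keep_indices(attn: torch.Tensor, budget: int, recent: int):
    """attn: (num_queries, seq_len) attention probabilities.
    Returns sorted indices of positions to keep: the `recent` newest
    tokens plus the heaviest hitters, `budget` positions in total."""
    seq_len = attn.shape[-1]
    scores = attn.sum(dim=0)                  # accumulated attention per position
    scores[seq_len - recent:] = float("inf")  # always retain the recent window
    keep = scores.topk(min(budget, seq_len)).indices
    return keep.sort().values

attn = torch.rand(8, 1024).softmax(dim=-1)    # stand-in attention map
keep = h2o_keep_indices(attn, budget=256, recent=32)
print(keep.shape)                             # torch.Size([256])
```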
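Quest's query-aware sparsity works at page granularity: each fixed-size KV page stores the elementwise min and max of its keys, a cheap upper bound on the query-key dot product is computed per page, and only the top-scoring pages are attended to. A sketch of the page-selection step, assuming the sequence length is a multiple of the page size:

```python
import torch

def quest_select_pages(q: torch.Tensor, keys: torch.Tensor,
                       page_size: int = 16, top_pages: int = 4):
    """q: (head_dim,), keys: (seq_len, head_dim).
    Returns indices of the pages with the largest upper bound on q.k."""
    pages = keys.view(-1, page_size, keys.shape[-1])  # (n_pages, page, dim)
    kmin = pages.amin(dim=1)                          # per-page channel minima
    kmax = pages.amax(dim=1)                          # per-page channel maxima
    # Per channel, q_i * k_i <= max(q_i * kmin_i, q_i * kmax_i), so the
    # channel-wise sum upper-bounds the full dot product for the page.
    upper = torch.maximum(q * kmin, q * kmax).sum(dim=-1)
    return upper.topk(min(top_pages, upper.numel())).indices

q = torch.randn(128)
keys = torch.randn(1024, 128)
print(quest_select_pages(q, keys, page_size=16, top_pages=8))
```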
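Finally, the L2 norm based EMNLP 2024 method is the simplest of the group: it relies on the observation that keys with a low L2 norm tend to receive high attention, so the cache keeps only the lowest-norm entries and needs no attention scores at all. A sketch under that reading of the paper:

```python
import torch

def l2_keep_indices(keys: torch.Tensor, budget: int):
    """keys: (seq_len, head_dim). Keep the `budget` positions whose key
    vectors have the smallest L2 norm (low norm ~ high attention)."""
    norms = keys.norm(dim=-1)  # (seq_len,)
    return norms.topk(budget, largest=False).indices.sort().values

keys = torch.randn(1024, 128)
compressed = keys[l2_keep_indices(keys, budget=256)]
print(compressed.shape)        # torch.Size([256, 128])
```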