[VLDB 26, NeurIPS 25] Scalable long-context LLM decoding that exploits attention sparsity by treating the KV cache as a vector storage system.
☆124 · Updated Feb 22, 2026
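The idea behind the tagline: during decoding, the cached keys are treated like vectors in a retrieval index, so each query attends over only the few cached tokens it actually matches instead of the whole context. Below is a minimal sketch of that pattern; the function name, the brute-force scan standing in for a real approximate nearest-neighbor index, and all parameters are illustrative assumptions, not RetrievalAttention's actual API.

```python
# Sketch of retrieval-based sparse attention for one decode step:
# treat cached keys as a vector index, retrieve the top-k matches for
# the query, and attend over only that subset. Illustrative only.
import numpy as np

def sparse_decode_step(q, K, V, k=32):
    """q: (d,) query; K: (n, d) cached keys; V: (n, d) cached values."""
    # Retrieval: score every cached key against the query. A real system
    # replaces this brute-force scan with an ANN index over the keys.
    scores = K @ q                              # (n,)
    topk = np.argpartition(scores, -k)[-k:]     # indices of the k best keys

    # Softmax attention restricted to the retrieved subset.
    s = scores[topk] / np.sqrt(K.shape[1])
    w = np.exp(s - s.max())
    w /= w.sum()
    return w @ V[topk]                          # (d,) attention output

# Toy usage: 4096 cached tokens, 64-dim head, attend over only 32 of them.
rng = np.random.default_rng(0)
n, d = 4096, 64
K, V = rng.standard_normal((n, d)), rng.standard_normal((n, d))
q = rng.standard_normal(d)
print(sparse_decode_step(q, K, V, k=32).shape)  # (64,)
```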
Alternatives and similar repositories for RetrievalAttention
Users interested in RetrievalAttention are comparing it to the libraries listed below.
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC'25) ☆24 · Updated Sep 15, 2025
- An auxiliary project analyzing the characteristics of the KV cache in DiT attention ☆33 · Updated Nov 29, 2024
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆93 · Updated Dec 2, 2025
- A tilelang-based training operator for the DeepSeek-V3.2-Exp DSA warmup Lightning Indexer ☆43 · Updated Nov 19, 2025
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆283 · Updated May 1, 2025
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) ☆30 · Updated Jun 14, 2024
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆82 · Updated Dec 7, 2025
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆53 · Updated Dec 17, 2024
- Implementation repository of our SOSP'24 paper, Aceso: Achieving Efficient Fault Tolerance in Memory-Disaggregated Key-Value … ☆22 · Updated Oct 20, 2024
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆250 · Updated Dec 16, 2024
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆372 · Updated Jul 10, 2025
- Query-Adaptive Vector Search ☆68 · Updated Feb 13, 2026
- Fast and memory-efficient exact attention ☆18 · Updated this week
- ☆24 · Updated May 9, 2025
- NVIDIA cuTile learn ☆163 · Updated Dec 9, 2025
- An experimental parallel training platform ☆56 · Updated Mar 25, 2024
- ☆19 · Updated Jun 1, 2025
- A sparse attention kernel supporting mixed sparse patterns ☆467 · Updated Jan 18, 2026
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆87 · Updated Nov 29, 2025
- NEO is an LLM inference engine that relieves GPU memory pressure via CPU offloading ☆84 · Updated Jun 16, 2025
- Retrieval with Learned Similarities (http://arxiv.org/abs/2407.15462, WWW'25 Oral) ☆52 · Updated Apr 23, 2025
- PyTorch implementation of our ICML 2024 paper, CaM: Cache Merging for Memory-efficient LLMs Inference ☆48 · Updated Jun 19, 2024
- Compare different hardware platforms via the Roofline Model for LLM inference tasks ☆120 · Updated Mar 13, 2024
- ☆37 · Updated Oct 11, 2025
- ☆32 · Updated Jul 2, 2025
- ☆118 · Updated May 19, 2025
- ☆41 · Updated Oct 15, 2025
- Arya: Arbitrary Graph Pattern Mining with Decomposition-based Sampling ☆16 · Updated Sep 27, 2023
- ☆30 · Updated Sep 13, 2025
- ☆52 · Updated May 19, 2025
- The Artifact Evaluation Version of SOSP Paper #19 ☆52 · Updated Aug 19, 2024
- Medusa: Accelerating Serverless LLM Inference with Materialization [ASPLOS'25] ☆41 · Updated May 13, 2025
- 16-fold memory access reduction with nearly no loss ☆109 · Updated Mar 26, 2025
- ☆38 · Updated Aug 7, 2025
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆129 · Updated Jun 24, 2025
- Prefix-Aware Attention for LLM Decoding ☆29 · Updated Jan 23, 2026
- KV cache compression for high-throughput LLM inference ☆154 · Updated Feb 5, 2025
- ☆131 · Updated Nov 11, 2024
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆57 · Updated Nov 20, 2024