[VLDB 26, NeurIPS 25] Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system.
☆133 · Feb 22, 2026 · Updated 3 weeks ago
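The headline idea above (exploiting attention sparsity by retrieving only the most relevant KV entries, as if the KV cache were a vector store) can be sketched with exact top-k attention. This is an illustrative toy under that interpretation, not RetrievalAttention's actual implementation, which uses an approximate nearest-neighbor index instead of scoring every key:

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=32):
    """Attend to only the k highest-scoring keys for query q.

    A retrieval-based system would replace the exact scan below
    with an ANN index lookup over K; the softmax-over-top-k step
    is the same either way.
    """
    scores = K @ q / np.sqrt(q.shape[0])      # similarity of q to every cached key
    top = np.argpartition(scores, -k)[-k:]    # indices of the k most relevant keys
    w = np.exp(scores[top] - scores[top].max())  # numerically stable softmax
    w /= w.sum()
    return w @ V[top]                         # weighted sum over retrieved values only

rng = np.random.default_rng(0)
q = rng.normal(size=64)
K = rng.normal(size=(4096, 64))   # 4096 cached key vectors
V = rng.normal(size=(4096, 64))
out = topk_sparse_attention(q, K, V, k=32)
```

Because attention weights are typically concentrated on a small fraction of tokens, attending over 32 retrieved entries instead of all 4096 approximates the full softmax output while touching far less memory.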
Alternatives and similar repositories for RetrievalAttention
Users interested in RetrievalAttention compare it to the libraries listed below.
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC'25) ☆26 · Feb 26, 2026 · Updated 3 weeks ago
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) ☆31 · Jun 14, 2024 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆286 · May 1, 2025 · Updated 10 months ago
- Retrieval with Learned Similarities (http://arxiv.org/abs/2407.15462, WWW'25 Oral) ☆52 · Apr 23, 2025 · Updated 10 months ago
- This is the implementation repository of our SOSP'24 paper: Aceso: Achieving Efficient Fault Tolerance in Memory-Disaggregated Key-Value … ☆24 · Oct 20, 2024 · Updated last year
- Query-Adaptive Vector Search ☆69 · Updated this week
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆33 · Nov 29, 2024 · Updated last year
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆44 · Nov 19, 2025 · Updated 4 months ago
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆83 · Dec 7, 2025 · Updated 3 months ago
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆251 · Dec 16, 2024 · Updated last year
- NEO is an LLM inference engine built to relieve the GPU memory crunch via CPU offloading ☆90 · Jun 16, 2025 · Updated 9 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆376 · Jul 10, 2025 · Updated 8 months ago
- ☆16 · Apr 15, 2025 · Updated 11 months ago
- Fast and memory-efficient exact attention ☆20 · Mar 13, 2026 · Updated last week
- ☆18 · Mar 11, 2025 · Updated last year
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆96 · Dec 2, 2025 · Updated 3 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆53 · Dec 17, 2024 · Updated last year
- Tutorial Exercises and Code for GPU Communications Tutorial at HOT Interconnects 2025 ☆31 · Oct 22, 2025 · Updated 4 months ago
- NVIDIA cuTile learn ☆164 · Dec 9, 2025 · Updated 3 months ago
- An experimental parallel training platform ☆56 · Mar 25, 2024 · Updated last year
- ☆24 · May 9, 2025 · Updated 10 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆118 · Mar 13, 2024 · Updated 2 years ago
- This is the implementation repository of our SOSP'24 paper: CHIME: A Cache-Efficient and High-Performance Hybrid Index on Disaggregated M… ☆28 · Nov 7, 2024 · Updated last year
- ☆52 · May 19, 2025 · Updated 10 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆122 · Jan 27, 2026 · Updated last month
- ☆32 · Jul 2, 2025 · Updated 8 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆480 · Jan 18, 2026 · Updated 2 months ago
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆89 · Nov 29, 2025 · Updated 3 months ago
- ☆119 · May 19, 2025 · Updated 10 months ago
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆164 · Oct 13, 2025 · Updated 5 months ago
- The Artifact Evaluation Version of SOSP Paper #19 ☆54 · Aug 19, 2024 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆210 · Sep 21, 2024 · Updated last year
- MSVBASE is a system that efficiently supports complex queries combining approximate similarity search and relational operators. It integrat… ☆103 · Nov 19, 2024 · Updated last year
- ☆229 · Nov 19, 2025 · Updated 4 months ago
- 16-fold memory access reduction with nearly no loss ☆108 · Mar 26, 2025 · Updated 11 months ago
- ☆31 · Sep 13, 2025 · Updated 6 months ago
- Repository for the COLM 2025 paper SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths ☆16 · Jul 10, 2025 · Updated 8 months ago
- ☆41 · Oct 15, 2025 · Updated 5 months ago
- ☆38 · Aug 7, 2025 · Updated 7 months ago