[VLDB 26, NeurIPS 25] Scalable long-context LLM decoding that leverages sparsity—by treating the KV cache as a vector storage system.
☆133 · Updated Feb 22, 2026
Alternatives and similar repositories for RetrievalAttention
Users that are interested in RetrievalAttention are comparing it to the libraries listed below.
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC'25) ☆27 · Updated Feb 26, 2026
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) ☆31 · Updated Jun 14, 2024
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆291 · Updated May 1, 2025
- Retrieval with Learned Similarities (http://arxiv.org/abs/2407.15462, WWW'25 Oral) ☆51 · Updated Apr 23, 2025
- This is the implementation repository of our SOSP'24 paper: Aceso: Achieving Efficient Fault Tolerance in Memory-Disaggregated Key-Value … ☆24 · Updated Oct 20, 2024
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang ☆44 · Updated Nov 19, 2025
- Query-Adaptive Vector Search ☆71 · Updated Mar 19, 2026
- An auxiliary project analyzing the characteristics of KV in DiT Attention ☆34 · Updated Nov 29, 2024
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆84 · Updated Dec 7, 2025
- [ICLR2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation ☆251 · Updated Dec 16, 2024
- NEO is an LLM inference engine built to alleviate the GPU memory crisis via CPU offloading ☆91 · Updated Jun 16, 2025
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆380 · Updated Jul 10, 2025
- ☆16 · Updated Apr 15, 2025
- Fast and memory-efficient exact attention ☆20 · Updated Mar 13, 2026
- ☆18 · Updated Mar 11, 2025
- Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding ☆97 · Updated Dec 2, 2025
- Tutorial Exercises and Code for the GPU Communications Tutorial at HOT Interconnects 2025 ☆31 · Updated Oct 22, 2025
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆52 · Updated Dec 17, 2024
- NVIDIA cuTile learn ☆166 · Updated Dec 9, 2025
- An experimental parallel training platform ☆56 · Updated Mar 25, 2024
- Compare different hardware platforms via the Roofline Model for LLM inference tasks ☆118 · Updated Mar 13, 2024
- ☆24 · Updated May 9, 2025
- This is the implementation repository of our SOSP'24 paper: CHIME: A Cache-Efficient and High-Performance Hybrid Index on Disaggregated M… ☆28 · Updated Nov 7, 2024
- ☆52 · Updated May 19, 2025
- The evaluation framework for training-free sparse attention in LLMs ☆122 · Updated Jan 27, 2026
- ☆32 · Updated Jul 2, 2025
- A sparse attention kernel supporting mixed sparse patterns ☆495 · Updated Jan 18, 2026
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆91 · Updated Nov 29, 2025
- ☆119 · Updated May 19, 2025
- Code for the paper [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆168 · Updated Oct 13, 2025
- The Artifact Evaluation Version of SOSP Paper #19 ☆54 · Updated Aug 19, 2024
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆213 · Updated Sep 21, 2024
- MSVBASE is a system that efficiently supports complex queries of both approximate similarity search and relational operators. It integrat… ☆103 · Updated Nov 19, 2024
- ☆239 · Updated Nov 19, 2025
- 16-fold memory access reduction with nearly no loss ☆108 · Updated Mar 26, 2025
- ☆32 · Updated Sep 13, 2025
- Repository for the COLM 2025 paper SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths ☆17 · Updated Jul 10, 2025
- ☆44 · Updated Oct 15, 2025
- ☆38 · Updated Aug 7, 2025