[SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference
☆83 · updated Dec 7, 2025
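For orientation, the idea named in PQCache's title, product quantization (PQ), compresses cached key vectors into short per-subspace centroid codes so that approximate query-key scores can be computed from small lookup tables and only the highest-scoring cache entries need to be fetched in full. The sketch below illustrates that general mechanism in plain NumPy; the function names, shapes, and toy k-means are illustrative assumptions, not PQCache's actual implementation or API.

```python
# Minimal sketch of product quantization (PQ) for approximate attention-score
# lookup over cached keys. All names/shapes here are hypothetical, not PQCache's code.
import numpy as np

def train_pq_codebooks(keys, n_subspaces=4, n_centroids=16, iters=10, seed=0):
    """Learn one small codebook per subspace of the key vectors (toy k-means)."""
    rng = np.random.default_rng(seed)
    n, d = keys.shape
    sub_d = d // n_subspaces
    codebooks = []
    for m in range(n_subspaces):
        sub = keys[:, m * sub_d:(m + 1) * sub_d]
        centroids = sub[rng.choice(n, n_centroids, replace=False)].copy()
        for _ in range(iters):  # plain Lloyd's iterations
            assign = np.argmin(((sub[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
            for c in range(n_centroids):
                members = sub[assign == c]
                if len(members):
                    centroids[c] = members.mean(axis=0)
        codebooks.append(centroids)
    return codebooks

def encode(keys, codebooks):
    """Replace each key by its per-subspace nearest-centroid indices (the PQ codes)."""
    n_subspaces = len(codebooks)
    sub_d = keys.shape[1] // n_subspaces
    codes = np.empty((keys.shape[0], n_subspaces), dtype=np.int64)
    for m, cb in enumerate(codebooks):
        sub = keys[:, m * sub_d:(m + 1) * sub_d]
        codes[:, m] = np.argmin(((sub[:, None, :] - cb[None, :, :]) ** 2).sum(-1), axis=1)
    return codes

def approx_scores(query, codes, codebooks):
    """Approximate query·key dot products from codes via per-subspace lookup tables."""
    n_subspaces = len(codebooks)
    sub_d = query.shape[0] // n_subspaces
    scores = np.zeros(codes.shape[0], dtype=np.float32)
    for m, cb in enumerate(codebooks):
        table = cb @ query[m * sub_d:(m + 1) * sub_d]  # each centroid's dot product with the query slice
        scores += table[codes[:, m]]
    return scores

# Toy usage: score 1,000 cached keys against one query and pick candidates to fetch exactly.
keys = np.random.randn(1000, 64).astype(np.float32)   # cached key vectors (tokens x head_dim)
query = np.random.randn(64).astype(np.float32)        # current decoding query
codebooks = train_pq_codebooks(keys, n_subspaces=8, n_centroids=32)
codes = encode(keys, codebooks)                        # 8 small centroid indices per key instead of 64 floats
topk = np.argsort(-approx_scores(query, codes, codebooks))[:8]
```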
Alternatives and similar repositories for PQCache
Users interested in PQCache are comparing it to the libraries listed below.
- ☆20 · updated Jun 1, 2025
- Residual vector quantization for KV cache compression in large language models · ☆12 · updated Oct 22, 2024
- [ICLR 2025 Spotlight] MagicPIG: LSH Sampling for Efficient LLM Generation · ☆251 · updated Dec 16, 2024
- ☆13 · updated Aug 1, 2025
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) · ☆182 · updated Jul 10, 2024
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) · ☆53 · updated Dec 17, 2024
- A list of papers in the field of approximate nearest neighbor search on high-dimensional vectors · ☆117 · updated Mar 2, 2026
- ☆21 · updated this week
- Official implementation of the paper "Navigating Labels and Vectors: A Unified Approach to Filtered Approximate Nearest Neighbor Search" · ☆33 · updated Dec 21, 2024
- Official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆51 · updated Oct 18, 2024
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference · ☆57 · updated Nov 20, 2024
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning · ☆89 · updated Nov 29, 2025
- Source code for the paper "Accelerating Graph Indexing for ANNS on Modern CPUs" · ☆34 · updated Nov 9, 2025
- ☆306 · updated Jul 10, 2025
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆376 · updated Jul 10, 2025
- ClusterKV: Manipulating LLM KV Cache in Semantic Space for Recallable Compression (DAC'25) · ☆26 · updated Feb 26, 2026
- Official implementation of "TailorKV: A Hybrid Framework for Long-Context Inference via Tailored KV Cache Optimization" (Findings of ACL …) · ☆21 · updated Jul 25, 2025
- ☆14 · updated Jan 20, 2025
- [VLDB 26, NeurIPS 25] Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system · ☆133 · updated Feb 22, 2026
- Query-Adaptive Vector Search · ☆69 · updated this week
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention · ☆53 · updated Aug 6, 2025
- Deft: A Scalable Tree Index for Disaggregated Memory · ☆23 · updated Apr 23, 2025
- ☆17 · updated May 30, 2025
- A low-latency, billion-scale, and updatable graph-based vector store on SSD · ☆102 · updated Feb 4, 2026
- ☆18 · updated Mar 11, 2025
- A cross-modal vector index with fast construction on heterogeneous CPU-GPU environments. Published at DaMoN@SIGMOD 2025 · ☆16 · updated Jul 16, 2025
- QJL: 1-Bit Quantized JL transform for KV Cache Quantization with Zero Overhead · ☆33 · updated Jan 27, 2025
- ☆81 · updated Sep 4, 2024
- 16-fold memory access reduction with nearly no loss · ☆108 · updated Mar 26, 2025
- Rcmp: Reconstructing RDMA-based Memory Disaggregation via CXL · ☆62 · updated Dec 26, 2023
- The code of the paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem…" · ☆396 · updated Apr 20, 2024
- The official implementation of Ada-KV [NeurIPS 2025] · ☆128 · updated Nov 26, 2025
- ☆16 · updated Aug 9, 2025
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models · ☆24 · updated Oct 5, 2024
- Segmented Code Adjustment Quantization (SAQ) · ☆18 · updated Sep 22, 2025
- ☆211 · updated this week
- ☆24 · updated Apr 4, 2024
- Block-based Approximate Nearest Neighbor · ☆35 · updated Nov 1, 2021
- GPU-accelerated vector query processing system that supports large vector datasets beyond GPU memory · ☆40 · updated Mar 24, 2024