Alternatives and similar repositories for CacheBlend
CacheBlend: ☆165 · Updated Jul 15, 2025
Users interested in CacheBlend are comparing it to the libraries listed below.
- ☆150 · Updated Oct 9, 2024
- The driver for LMCache core to run in vLLM · ☆61 · Updated Feb 4, 2025
- Disaggregated serving system for Large Language Models (LLMs) · ☆778 · Updated Apr 6, 2025
- ☆20 · Updated Jun 9, 2025
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] · ☆24 · Updated Nov 21, 2024
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) · ☆30 · Updated Jun 14, 2024
- Supercharge Your LLM with the Fastest KV Cache Layer · ☆7,272 · Updated this week
- A low-latency & high-throughput serving engine for LLMs · ☆482 · Updated Jan 8, 2026
- Efficient and easy multi-instance LLM serving · ☆528 · Updated Sep 3, 2025
- ☆16 · Updated Apr 15, 2025
- ☆131 · Updated Nov 11, 2024
- The Official Implementation of Ada-KV [NeurIPS 2025] · ☆128 · Updated Nov 26, 2025
- ☆95 · Updated Nov 25, 2024
- A large-scale simulation framework for LLM inference · ☆545 · Updated Jul 25, 2025
- ☆45 · Updated Jun 7, 2024
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆374 · Updated Jul 10, 2025
- Stateful LLM Serving · ☆97 · Updated Mar 11, 2025
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving · ☆74 · Updated Sep 15, 2025
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) · ☆179 · Updated Jul 10, 2024
- 16-fold memory access reduction with nearly no loss · ☆108 · Updated Mar 26, 2025
- ☆302 · Updated Jul 10, 2025
- Scaling Up Memory Disaggregated Applications with SMART · ☆34 · Updated Apr 23, 2024
- Dynamic Memory Management for Serving LLMs without PagedAttention · ☆464 · Updated May 30, 2025
- Query-Adaptive Vector Search · ☆69 · Updated Feb 13, 2026
- ☆87 · Updated Jan 22, 2026
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆51 · Updated Oct 18, 2024
- ☆34 · Updated Jun 22, 2024
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ☆283 · Updated May 1, 2025
- A reading list of popular MLSys topics · ☆22 · Updated Mar 20, 2025
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference · ☆83 · Updated Dec 7, 2025
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆527 · Updated Feb 10, 2025
- ☆64 · Updated Dec 3, 2024
- Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" [ICLR 2025 Oral] · ☆161 · Updated Oct 13, 2025
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes · ☆414 · Updated Mar 3, 2025
- ☆13 · Updated Mar 26, 2024
- ☆85 · Updated Apr 18, 2025
- Modular and structured prompt caching for low-latency LLM inference · ☆109 · Updated Nov 9, 2024
- ☆26 · Updated Mar 31, 2022
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable · ☆210 · Updated Sep 21, 2024