☆177 · Jul 15, 2025 · Updated 9 months ago
Alternatives and similar repositories for CacheBlend
Users interested in CacheBlend are comparing it to the repositories listed below.
- ☆158 · Oct 9, 2024 · Updated last year
- The driver for LMCache core to run in vLLM · ☆64 · Feb 4, 2025 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs) · ☆801 · Apr 6, 2025 · Updated last year
- ☆99 · Nov 25, 2024 · Updated last year
- Supercharge Your LLM with the Fastest KV Cache Layer · ☆7,969 · Updated this week
- ☆20 · Jun 9, 2025 · Updated 10 months ago
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) · ☆31 · Jun 14, 2024 · Updated last year
- Efficient and easy multi-instance LLM serving · ☆543 · Mar 12, 2026 · Updated last month
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] · ☆24 · Nov 21, 2024 · Updated last year
- A low-latency & high-throughput serving engine for LLMs · ☆491 · Jan 8, 2026 · Updated 3 months ago
- ☆17 · Apr 15, 2025 · Updated last year
- The Official Implementation of Ada-KV [NeurIPS 2025] · ☆131 · Nov 26, 2025 · Updated 4 months ago
- Stateful LLM Serving · ☆98 · Mar 11, 2025 · Updated last year
- ☆309 · Jul 10, 2025 · Updated 9 months ago
- A large-scale simulation framework for LLM inference · ☆587 · Jul 25, 2025 · Updated 8 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆380 · Jul 10, 2025 · Updated 9 months ago
- ☆12 · Mar 26, 2024 · Updated 2 years ago
- ☆132 · Nov 11, 2024 · Updated last year
- ☆47 · Jun 7, 2024 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention · ☆478 · May 30, 2025 · Updated 10 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI '24) · ☆182 · Jul 10, 2024 · Updated last year
- ☆102 · Apr 6, 2026 · Updated last week
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆50 · Oct 18, 2024 · Updated last year
- ☆47 · Mar 15, 2025 · Updated last year
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes · ☆427 · Mar 3, 2025 · Updated last year
- Scaling Up Memory Disaggregated Applications with SMART · ☆34 · Apr 23, 2024 · Updated last year
- ☆65 · Dec 3, 2024 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ☆294 · May 1, 2025 · Updated 11 months ago
- Modular and structured prompt caching for low-latency LLM inference · ☆112 · Nov 9, 2024 · Updated last year
- ☆84 · Apr 18, 2025 · Updated last year
- [ICLR 2025 Oral] Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" · ☆168 · Oct 13, 2025 · Updated 6 months ago
- C++ RPC based on RDMA · ☆13 · Sep 12, 2023 · Updated 2 years ago
- 16-fold memory access reduction with nearly no loss · ☆108 · Mar 26, 2025 · Updated last year
- ☆26 · Mar 31, 2022 · Updated 4 years ago
- Query-Adaptive Vector Search · ☆72 · Mar 19, 2026 · Updated last month
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs" · ☆16 · Sep 15, 2024 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆536 · Feb 10, 2025 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI · ☆5,122 · Updated this week
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference · ☆85 · Dec 7, 2025 · Updated 4 months ago