☆182 · Jul 15, 2025 · Updated 9 months ago
Alternatives and similar repositories for CacheBlend
Users interested in CacheBlend are comparing it to the libraries listed below.
- ☆158 · Oct 9, 2024 · Updated last year
- The driver for LMCache core to run in vLLM · ☆64 · Feb 4, 2025 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs) · ☆807 · Apr 6, 2025 · Updated last year
- ☆99 · Nov 25, 2024 · Updated last year
- Supercharge Your LLM with the Fastest KV Cache Layer · ☆8,187 · Updated this week
- ☆21 · Jun 9, 2025 · Updated 11 months ago
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design (KDD 2025) · ☆32 · Jun 14, 2024 · Updated last year
- ☆47 · Jun 7, 2024 · Updated last year
- Efficient and easy multi-instance LLM serving · ☆547 · Mar 12, 2026 · Updated last month
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] · ☆24 · Nov 21, 2024 · Updated last year
- A low-latency & high-throughput serving engine for LLMs · ☆496 · Jan 8, 2026 · Updated 4 months ago
- ☆17 · Apr 15, 2025 · Updated last year
- The official implementation of Ada-KV [NeurIPS 2025] · ☆132 · Nov 26, 2025 · Updated 5 months ago
- Stateful LLM Serving · ☆99 · Mar 11, 2025 · Updated last year
- ☆313 · Jul 10, 2025 · Updated 9 months ago
- Accurate, large-scale, and extensible simulator for LLM inference systems · ☆595 · Jul 25, 2025 · Updated 9 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆384 · Jul 10, 2025 · Updated 9 months ago
- ☆11 · Mar 26, 2024 · Updated 2 years ago
- ☆133 · Nov 11, 2024 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention · ☆482 · May 30, 2025 · Updated 11 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI '24) · ☆184 · Jul 10, 2024 · Updated last year
- ☆108 · Apr 23, 2026 · Updated 2 weeks ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" · ☆50 · Oct 18, 2024 · Updated last year
- ☆47 · Mar 15, 2025 · Updated last year
- Awesome-LLM-KV-Cache: A curated list of 📙 awesome LLM KV cache papers with code · ☆430 · Mar 3, 2025 · Updated last year
- Scaling Up Memory Disaggregated Applications with SMART · ☆34 · Apr 23, 2024 · Updated 2 years ago
- ☆66 · Dec 3, 2024 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference · ☆297 · May 1, 2025 · Updated last year
- Modular and structured prompt caching for low-latency LLM inference · ☆112 · Nov 9, 2024 · Updated last year
- Code for the paper [ICLR 2025 Oral] "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" · ☆168 · Oct 13, 2025 · Updated 6 months ago
- ☆85 · Apr 18, 2025 · Updated last year
- C++ RPC based on RDMA · ☆13 · Sep 12, 2023 · Updated 2 years ago
- 16-fold memory access reduction with nearly no loss · ☆108 · Mar 26, 2025 · Updated last year
- ☆26 · Mar 31, 2022 · Updated 4 years ago
- Query-Adaptive Vector Search · ☆72 · Mar 19, 2026 · Updated last month
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs" · ☆17 · Sep 15, 2024 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads · ☆543 · Feb 10, 2025 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI · ☆5,276 · Updated this week
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference · ☆88 · Dec 7, 2025 · Updated 5 months ago