☆169 · Jul 15, 2025 · Updated 8 months ago
Alternatives and similar repositories for CacheBlend
Users interested in CacheBlend are comparing it to the libraries listed below.
- ☆154 · Oct 9, 2024 · Updated last year
- The driver for LMCache core to run in vLLM (☆63 · Feb 4, 2025 · Updated last year)
- Disaggregated serving system for Large Language Models (LLMs) (☆792 · Apr 6, 2025 · Updated 11 months ago)
- ☆99 · Nov 25, 2024 · Updated last year
- Supercharge Your LLM with the Fastest KV Cache Layer (☆7,745 · Updated this week)
- ☆20 · Jun 9, 2025 · Updated 9 months ago
- PipeRAG: Fast Retrieval-Augmented Generation via Algorithm-System Co-design [KDD 2025] (☆31 · Jun 14, 2024 · Updated last year)
- Efficient and easy multi-instance LLM serving (☆536 · Mar 12, 2026 · Updated 2 weeks ago)
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] (☆24 · Nov 21, 2024 · Updated last year)
- A low-latency & high-throughput serving engine for LLMs (☆486 · Jan 8, 2026 · Updated 2 months ago)
- ☆16 · Apr 15, 2025 · Updated 11 months ago
- The Official Implementation of Ada-KV [NeurIPS 2025] (☆128 · Nov 26, 2025 · Updated 4 months ago)
- Stateful LLM Serving (☆97 · Mar 11, 2025 · Updated last year)
- ☆311 · Jul 10, 2025 · Updated 8 months ago
- A large-scale simulation framework for LLM inference (☆564 · Jul 25, 2025 · Updated 8 months ago)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (☆377 · Jul 10, 2025 · Updated 8 months ago)
- ☆12 · Mar 26, 2024 · Updated 2 years ago
- ☆131 · Nov 11, 2024 · Updated last year
- ☆47 · Jun 7, 2024 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆466 · May 30, 2025 · Updated 9 months ago)
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management [OSDI '24] (☆182 · Jul 10, 2024 · Updated last year)
- ☆96 · Jan 22, 2026 · Updated 2 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" (☆50 · Oct 18, 2024 · Updated last year)
- ☆46 · Mar 15, 2025 · Updated last year
- Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes (☆419 · Mar 3, 2025 · Updated last year)
- Scaling Up Memory Disaggregated Applications with SMART (☆34 · Apr 23, 2024 · Updated last year)
- ☆64 · Dec 3, 2024 · Updated last year
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference (☆286 · May 1, 2025 · Updated 10 months ago)
- ☆85 · Apr 18, 2025 · Updated 11 months ago
- Modular and structured prompt caching for low-latency LLM inference (☆109 · Nov 9, 2024 · Updated last year)
- Code for the paper "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" [ICLR 2025 Oral] (☆164 · Oct 13, 2025 · Updated 5 months ago)
- C++ RPC based on RDMA (☆13 · Sep 12, 2023 · Updated 2 years ago)
- Query-Adaptive Vector Search (☆70 · Mar 19, 2026 · Updated last week)
- 16-fold memory access reduction with nearly no loss (☆108 · Mar 26, 2025 · Updated last year)
- ☆26 · Mar 31, 2022 · Updated 3 years ago
- Code repo for "CritiPrefill: A Segment-wise Criticality-based Approach for Prefilling Acceleration in LLMs" (☆16 · Sep 15, 2024 · Updated last year)
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads (☆532 · Feb 10, 2025 · Updated last year)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI (☆4,953 · Mar 20, 2026 · Updated last week)
- ☆39 · Oct 16, 2025 · Updated 5 months ago