Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond
☆846 · Apr 7, 2026 · Updated last week
Alternatives and similar repositories for kvcached
Users interested in kvcached are comparing it to the libraries listed below.
- Research prototype of PRISM — a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆59 · Mar 17, 2026 · Updated last month
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆422 · Updated this week
- Mako is a low-pause, high-throughput garbage collector designed for memory-disaggregated datacenters. ☆15 · Sep 2, 2024 · Updated last year
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆5,071 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆474 · May 30, 2025 · Updated 10 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆970 · Apr 10, 2026 · Updated last week
- Efficient and easy multi-instance LLM serving ☆543 · Mar 12, 2026 · Updated last month
- KV cache store for distributed LLM inference ☆405 · Nov 13, 2025 · Updated 5 months ago
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g… ☆1,280 · Apr 10, 2026 · Updated last week
- AgentCgroup: Understanding and Controlling OS Resources of AI Agents ☆41 · Mar 7, 2026 · Updated last month
- A Datacenter Scale Distributed Inference Serving Framework ☆6,527 · Apr 10, 2026 · Updated last week
- FlashInfer: Kernel Library for LLM Serving ☆5,372 · Updated this week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆323 · Jun 10, 2025 · Updated 10 months ago
- Gateway API Inference Extension ☆639 · Updated this week
- Supercharge Your LLM with the Fastest KV Cache Layer ☆7,969 · Updated this week
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆213 · Sep 21, 2024 · Updated last year
- High Performance KV Cache Store for LLM ☆53 · Apr 6, 2026 · Updated last week
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆282 · Mar 6, 2025 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆952 · Mar 29, 2026 · Updated 2 weeks ago
- Stateful LLM Serving ☆98 · Mar 11, 2025 · Updated last year
- A low-latency & high-throughput serving engine for LLMs ☆491 · Jan 8, 2026 · Updated 3 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆801 · Apr 6, 2025 · Updated last year
- [ICML'25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an… ☆13 · Apr 17, 2025 · Updated 11 months ago
- A lightweight, configurable, and real-time simulator designed to mimic the behavior of vLLM without the need for GPUs or running actual h… ☆113 · Apr 9, 2026 · Updated last week
- ☆10 · Sep 19, 2021 · Updated 4 years ago
- ☆119 · May 19, 2025 · Updated 10 months ago
- VQPy: An object-oriented approach to modern video analytics ☆42 · Oct 28, 2024 · Updated last year
- ☆79 · Sep 15, 2025 · Updated 7 months ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,403 · Updated this week
- Fast OS-level support for GPU checkpoint and restore ☆280 · Sep 28, 2025 · Updated 6 months ago
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,478 · Apr 10, 2026 · Updated last week
- ☆132 · Nov 11, 2024 · Updated last year
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization ☆2,267 · Apr 9, 2026 · Updated last week
- An interference-aware scheduler for fine-grained GPU sharing ☆162 · Nov 26, 2025 · Updated 4 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆501 · Mar 24, 2025 · Updated last year
- Follows the same workflows as Kubernetes. Widely used in the InftyAI community. ☆13 · Dec 5, 2025 · Updated 4 months ago
- Serverless LLM Serving for Everyone. ☆670 · Mar 6, 2026 · Updated last month
- 🧯 Kubernetes coverage for fault awareness and recovery; works with any LLMOps, MLOps, or AI workloads. ☆35 · Mar 31, 2026 · Updated 2 weeks ago
- A lightweight design for computation-communication overlap. ☆226 · Jan 20, 2026 · Updated 2 months ago