Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond
☆796 · Updated Feb 27, 2026 (last week)
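A virtualized, elastic KV cache in this sense decouples the cache's virtual address space from physical GPU memory: the engine reserves a large virtual range up front and backs it with physical pages only as the cache grows, releasing them as it shrinks so co-located models can share the GPU. Below is a minimal sketch of that reserve/map/unmap lifecycle using the CUDA driver's virtual memory management (VMM) API; it illustrates the general technique only and is not kvcached's actual code or API.

```cpp
// Conceptual sketch of an elastic, virtually-addressed KV cache region.
// Build: nvcc sketch.cu -lcuda
#include <cuda.h>
#include <cstdio>

#define CHECK(call)                                           \
  do {                                                        \
    CUresult r = (call);                                      \
    if (r != CUDA_SUCCESS) {                                  \
      const char *msg;                                        \
      cuGetErrorString(r, &msg);                              \
      fprintf(stderr, "%s failed: %s\n", #call, msg);         \
      return 1;                                               \
    }                                                         \
  } while (0)

int main() {
  CHECK(cuInit(0));
  CUdevice dev;
  CHECK(cuDeviceGet(&dev, 0));
  CUcontext ctx;
  CHECK(cuCtxCreate(&ctx, 0, dev));

  // Physical pages must be allocated at the driver's granularity (often 2 MiB).
  CUmemAllocationProp prop = {};
  prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
  prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
  prop.location.id = dev;
  size_t gran;
  CHECK(cuMemGetAllocationGranularity(&gran, &prop,
                                      CU_MEM_ALLOC_GRANULARITY_MINIMUM));

  // 1. Reserve a large virtual range for the KV cache. This consumes no
  //    physical GPU memory, so it can be sized for the worst case.
  size_t va_size = 64 * gran;
  CUdeviceptr base;
  CHECK(cuMemAddressReserve(&base, va_size, 0, 0, 0));

  // 2. As the cache grows, back one page of the range with physical memory
  //    and make it accessible; kernels then read/write through `base`.
  CUmemGenericAllocationHandle handle;
  CHECK(cuMemCreate(&handle, gran, &prop, 0));
  CHECK(cuMemMap(base, gran, 0, handle, 0));
  CUmemAccessDesc access = {};
  access.location = prop.location;
  access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
  CHECK(cuMemSetAccess(base, gran, &access, 1));

  // 3. As the cache shrinks, unmap and release the physical page. The
  //    virtual range survives, and the freed memory becomes available to
  //    other models sharing the GPU.
  CHECK(cuMemUnmap(base, gran));
  CHECK(cuMemRelease(handle));

  CHECK(cuMemAddressFree(base, va_size));
  CHECK(cuCtxDestroy(ctx));
  return 0;
}
```

In a real serving engine, the map and unmap steps in (2) and (3) would be driven by the engine's allocator as requests arrive and complete; the sketch walks the lifecycle once for a single page.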
Alternatives and similar repositories for kvcached
Users interested in kvcached are comparing it to the libraries listed below.
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆384 · Updated this week
- Research prototype of PRISM — a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆58 · Updated Aug 15, 2025 (6 months ago)
- KV cache store for distributed LLM inference. ☆396 · Updated Nov 13, 2025 (3 months ago)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,843 · Updated this week
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g… ☆1,224 · Updated Feb 28, 2026 (last week)
- Efficient and easy multi-instance LLM serving. ☆528 · Updated Sep 3, 2025 (6 months ago)
- NVIDIA Inference Xfer Library (NIXL). ☆898 · Updated Feb 28, 2026 (last week)
- Dynamic Memory Management for Serving LLMs without PagedAttention. ☆464 · Updated May 30, 2025 (9 months ago)
- A Datacenter Scale Distributed Inference Serving Framework. ☆6,154 · Updated Feb 28, 2026 (last week)
- FlashInfer: Kernel Library for LLM Serving. ☆5,057 · Updated this week
- Stateful LLM Serving. ☆97 · Updated Mar 11, 2025 (11 months ago)
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable. ☆210 · Updated Sep 21, 2024 (last year)
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆315 · Updated Jun 10, 2025 (8 months ago)
- Mako is a low-pause, high-throughput garbage collector designed for memory-disaggregated datacenters. ☆15 · Updated Sep 2, 2024 (last year)
- Follows the same workflows as Kubernetes. Widely used in the InftyAI community. ☆13 · Updated Dec 5, 2025 (3 months ago)
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs. ☆24 · Updated Sep 23, 2025 (5 months ago)
- Gateway API Inference Extension. ☆597 · Updated this week
- 🧯 Kubernetes coverage for fault awareness and recovery; works with any LLMOps, MLOps, or AI workload. ☆35 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆498 · Updated Mar 24, 2025 (11 months ago)
- A lightweight design for computation-communication overlap. ☆223 · Updated Jan 20, 2026 (last month)
- A throughput-oriented high-performance serving framework for LLMs. ☆947 · Updated Oct 29, 2025 (4 months ago)
- Fast OS-level support for GPU checkpoint and restore. ☆271 · Updated Sep 28, 2025 (5 months ago)
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆283 · Updated Mar 6, 2025 (last year)
- Disaggregated serving system for Large Language Models (LLMs). ☆778 · Updated Apr 6, 2025 (11 months ago)
- Supercharge Your LLM with the Fastest KV Cache Layer. ☆7,272 · Updated this week
- A low-latency & high-throughput serving engine for LLMs. ☆482 · Updated Jan 8, 2026 (last month)
- High Performance KV Cache Store for LLM. ☆47 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,264 · Updated Aug 28, 2025 (6 months ago)
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization. ☆2,187 · Updated Feb 27, 2026 (last week)
- Distributed Compiler based on Triton for Parallel Systems. ☆1,371 · Updated Feb 13, 2026 (3 weeks ago)
- ☆32 · Updated Jul 2, 2025 (8 months ago)
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆105 · Updated Dec 24, 2022 (3 years ago)
- ☆97 · Updated Mar 26, 2025 (11 months ago)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels. ☆5,284 · Updated Feb 28, 2026 (last week)
- Cost-efficient and pluggable infrastructure components for GenAI inference. ☆4,650 · Updated Feb 27, 2026 (last week)
- ☆160 · Updated Dec 27, 2024 (last year)
- Perplexity GPU Kernels. ☆567 · Updated Nov 7, 2025 (4 months ago)
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes. ☆2,543 · Updated this week
- Offline optimization of your disaggregated Dynamo graph. ☆195 · Updated this week