Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond
☆813 · Updated Mar 17, 2026
Alternatives and similar repositories for kvcached
Users interested in kvcached are comparing it to the libraries listed below.
- Research prototype of PRISM — a cost-efficient multi-LLM serving system with flexible time- and space-based GPU sharing. ☆58 · Updated Mar 17, 2026
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆404 · Updated this week
- Mako is a low-pause, high-throughput garbage collector designed for memory-disaggregated datacenters. ☆15 · Updated Sep 2, 2024
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,953 · Updated Mar 20, 2026
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆466 · Updated May 30, 2025
- Efficient and easy multi-instance LLM serving ☆536 · Updated Mar 12, 2026
- KV cache store for distributed LLM inference ☆400 · Updated Nov 13, 2025
- NVIDIA Inference Xfer Library (NIXL) ☆945 · Updated Mar 20, 2026
- UCCL is an efficient communication library for GPUs, covering collectives, P2P (e.g., KV cache transfer, RL weight transfer), and EP (e.g… ☆1,240 · Updated Mar 20, 2026
- A Datacenter Scale Distributed Inference Serving Framework ☆6,347 · Updated Mar 20, 2026
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of …) ☆321 · Updated Jun 10, 2025
- Gateway API Inference Extension ☆616 · Updated this week
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆211 · Updated Sep 21, 2024
- Supercharge Your LLM with the Fastest KV Cache Layer ☆7,745 · Updated this week
- High Performance KV Cache Store for LLM ☆51 · Updated Mar 20, 2026
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆282 · Updated Mar 6, 2025
- A throughput-oriented high-performance serving framework for LLMs ☆950 · Updated Oct 29, 2025
- Stateful LLM Serving ☆97 · Updated Mar 11, 2025
- A low-latency & high-throughput serving engine for LLMs ☆486 · Updated Jan 8, 2026
- Disaggregated serving system for Large Language Models (LLMs). ☆792 · Updated Apr 6, 2025
- [ICML'25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an…" ☆13 · Updated Apr 17, 2025
- A lightweight, configurable, and real-time simulator designed to mimic the behavior of vLLM without the need for GPUs or running actual h… ☆103 · Updated Mar 19, 2026
- ☆119 · Updated May 19, 2025
- ☆10 · Updated Sep 19, 2021
- VQPy: An object-oriented approach to modern video analytics ☆42 · Updated Oct 28, 2024
- ☆76 · Updated Sep 15, 2025
- Distributed Compiler based on Triton for Parallel Systems ☆1,394 · Updated Mar 11, 2026
- Fast OS-level support for GPU checkpoint and restore ☆279 · Updated Sep 28, 2025
- ☆131 · Updated Nov 11, 2024
- vLLM's reference system for K8S-native cluster-wide deployment with community-driven performance optimization ☆2,227 · Updated Mar 19, 2026
- An interference-aware scheduler for fine-grained GPU sharing ☆161 · Updated Nov 26, 2025
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,403 · Updated Mar 20, 2026
- GLake: optimizing GPU memory management and IO transmission. ☆498 · Updated Mar 24, 2025
- Follows the same workflows as Kubernetes. Widely used in the InftyAI community. ☆13 · Updated Dec 5, 2025
- Serverless LLM Serving for Everyone. ☆664 · Updated Mar 6, 2026
- 🧯 Kubernetes coverage for fault awareness and recovery; works with any LLMOps, MLOps, or AI workload. ☆35 · Updated Mar 14, 2026
- A lightweight design for computation-communication overlap. ☆225 · Updated Jan 20, 2026
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆105 · Updated Dec 24, 2022