vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization
☆2,187 · Feb 27, 2026 · Updated this week
Alternatives and similar repositories for production-stack
Users interested in production-stack are comparing it to the libraries listed below.
- Supercharge Your LLM with the Fastest KV Cache Layer ☆6,923 · Updated this week
- Cost-efficient and pluggable infrastructure components for GenAI inference ☆4,650 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆6,154 · Updated this week
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆23,905 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,843 · Updated this week
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes ☆2,543 · Updated this week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆673 · Updated this week
- Gateway API Inference Extension ☆594 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71,234 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,787 · Updated this week
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆380 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,009 · Feb 23, 2026 · Updated last week
- Efficient and easy multi-instance LLM serving ☆527 · Sep 3, 2025 · Updated 5 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆880 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆898 · Updated this week
- AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-te… ☆1,155 · Feb 23, 2026 · Updated last week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,938 · Updated this week
- A toolkit to run Ray applications on Kubernetes ☆2,341 · Feb 23, 2026 · Updated last week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,618 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,919 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆777 · Apr 6, 2025 · Updated 10 months ago
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes ☆5,135 · Updated this week
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, 20+ clouds, o… ☆9,478 · Updated this week
- A low-latency & high-throughput serving engine for LLMs ☆480 · Jan 8, 2026 · Updated last month
- LLMPerf is a library for validating and benchmarking LLMs ☆1,088 · Dec 9, 2024 · Updated last year
- My learning notes for ML SYS. ☆5,444 · Jan 30, 2026 · Updated last month
- KV cache store for distributed LLM inference ☆392 · Nov 13, 2025 · Updated 3 months ago
- KAI Scheduler is an open-source Kubernetes-native scheduler for AI workloads at large scale ☆1,144 · Updated this week
- Large Language Model Text Generation Inference ☆10,788 · Jan 8, 2026 · Updated last month
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,393 · Updated this week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆289 · Jan 26, 2026 · Updated last month
- Efficient Triton Kernels for LLM Training ☆6,162 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,339 · Updated this week
- Perplexity GPU Kernels ☆567 · Nov 7, 2025 · Updated 3 months ago
- 📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉 ☆5,022 · Updated this week
- Materials for learning SGLang ☆753 · Jan 5, 2026 · Updated last month
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆271 · Feb 20, 2026 · Updated last week
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,970 · May 15, 2025 · Updated 9 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,728 · May 21, 2025 · Updated 9 months ago