Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, TensorRT-LLM, and Triton.
☆397 · Mar 18, 2026 · Updated this week
Alternatives and similar repositories for ome
Users interested in ome are comparing it to the libraries listed below.
- A Rust reimplementation of genai-bench for benchmarking LLM serving systems at high concurrency with accurate timing and industry-standar… ☆279 · Updated this week
- Following the same workflows as Kubernetes. Widely used in the InftyAI community. ☆13 · Dec 5, 2025 · Updated 3 months ago
- A workload for deploying LLM inference services on Kubernetes ☆190 · Updated this week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆813 · Mar 17, 2026 · Updated last week
- Gateway API Inference Extension ☆616 · Updated this week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆682 · Updated this week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆293 · Jan 26, 2026 · Updated last month
- A lightweight, configurable, and real-time simulator designed to mimic the behavior of vLLM without the need for GPUs or running actual h… ☆103 · Updated this week
- NVIDIA Inference Xfer Library (NIXL) ☆945 · Updated this week
- An Envoy-inspired, ultimate LLM-first gateway for LLM serving and downstream application developers and enterprises ☆26 · Apr 24, 2025 · Updated 11 months ago
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes ☆2,657 · Updated this week
- A Datacenter Scale Distributed Inference Serving Framework ☆6,347 · Updated this week
- WG Serving ☆34 · Mar 5, 2026 · Updated 2 weeks ago
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,953 · Updated this week
- Materials for learning SGLang ☆775 · Jan 5, 2026 · Updated 2 months ago
- 💫 A lightweight p2p-based cache system for model distributions on Kubernetes. Reframing now to make it a unified cache system with POSI… ☆26 · Dec 6, 2024 · Updated last year
- Offline optimization of your disaggregated Dynamo graph ☆227 · Updated this week
- vLLM’s reference system for K8s-native cluster-wide deployment with community-driven performance optimization ☆2,227 · Updated this week
- Kubernetes-native AI serving platform for scalable model serving. ☆267 · Updated this week
- The main purpose of runtime copilot is to assist with node runtime management tasks such as configuring registries, upgrading versions, i… ☆12 · May 16, 2023 · Updated 2 years ago
- https://bbuf.github.io/gpu-glossary-zh/ ☆26 · Nov 7, 2025 · Updated 4 months ago
- Fast and memory-efficient exact attention ☆21 · Mar 13, 2026 · Updated last week
- KV cache store for distributed LLM inference ☆399 · Nov 13, 2025 · Updated 4 months ago
- Efficient and easy multi-instance LLM serving ☆532 · Mar 12, 2026 · Updated last week
- NVIDIA DRA Driver for GPUs ☆585 · Updated this week
- Cost-efficient and pluggable infrastructure components for GenAI inference ☆4,682 · Updated this week
- KAI Scheduler is an open-source Kubernetes-native scheduler for AI workloads at large scale ☆1,181 · Mar 17, 2026 · Updated last week
- The Intelligent Inference Scheduler for Large-scale Inference Services. ☆64 · Feb 12, 2026 · Updated last month
- A high-performance proxy for the Kubernetes APIServer: it proxies List requests, while all other request types are reverse-proxied directly to the native APIServer. CKube additionally supports pagination, search, and indexing, and is 100% compatible with native kubectl and ku… ☆19 · Sep 16, 2022 · Updated 3 years ago
- Simplified Data Management and Sharing for Kubernetes ☆18 · Mar 11, 2026 · Updated last week
- GenAI inference performance benchmarking tool ☆156 · Mar 16, 2026 · Updated last week
- Supercharge Your LLM with the Fastest KV Cache Layer ☆7,745 · Updated this week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆736 · Updated this week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆925 · Feb 28, 2026 · Updated 3 weeks ago
- 🎉 An awesome & curated list of the best LLMOps tools. ☆215 · Mar 16, 2026 · Updated last week
- AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-te… ☆1,165 · Feb 23, 2026 · Updated last month
- A toolkit to run Ray applications on Kubernetes ☆2,388 · Updated this week
- Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes ☆5,216 · Mar 17, 2026 · Updated last week
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆935 · Updated this week