Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, TensorRT-LLM, and Triton. ☆380, updated Feb 25, 2026
Alternatives and similar repositories for ome
Users interested in ome are comparing it to the libraries listed below.
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… — ☆271, updated Feb 20, 2026
- Following the same workflows as Kubernetes; widely used in the InftyAI community. — ☆13, updated Dec 5, 2025
- A workload for deploying LLM inference services on Kubernetes — ☆171, updated Feb 18, 2026
- Gateway API Inference Extension — ☆597, updated this week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication — ☆673, updated this week
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond — ☆796, updated this week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! — ☆289, updated Jan 26, 2026
- A lightweight vLLM simulator for mocking out replicas. — ☆87, updated this week
- 💫 A lightweight p2p-based cache system for model distributions on Kubernetes. Reframing now to make it a unified cache system with POSI… — ☆25, updated Dec 6, 2024
- NVIDIA Inference Xfer Library (NIXL) — ☆898, updated this week
- An Envoy-inspired, LLM-first gateway for LLM serving and downstream application developers and enterprises — ☆26, updated Apr 24, 2025
- Offline optimization of your disaggregated Dynamo graph — ☆195, updated this week
- The main purpose of runtime copilot is to assist with node runtime management tasks such as configuring registries, upgrading versions, i… — ☆12, updated May 16, 2023
- GenAI inference performance benchmarking tool — ☆151, updated this week
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes — ☆2,543, updated this week
- A Datacenter Scale Distributed Inference Serving Framework — ☆6,154, updated this week
- KV cache store for distributed LLM inference — ☆392, updated Nov 13, 2025
- GPU glossary (Chinese translation): https://bbuf.github.io/gpu-glossary-zh/ — ☆26, updated Nov 7, 2025
- Fast and memory-efficient exact attention — ☆18, updated this week
- vLLM’s reference system for K8S-native cluster-wide deployment with community-driven performance optimization — ☆2,187, updated this week
- WG Serving — ☆34, updated Dec 15, 2025
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. — ☆4,843, updated this week
- Materials for learning SGLang — ☆753, updated Jan 5, 2026
- Kubernetes-native AI serving platform for scalable model serving. — ☆233, updated this week
- NVIDIA DRA Driver for GPUs — ☆574, updated this week
- The Intelligent Inference Scheduler for large-scale inference services. — ☆64, updated Feb 12, 2026
- Cost-efficient and pluggable infrastructure components for GenAI inference — ☆4,650, updated this week
- AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-te… — ☆1,158, updated Feb 23, 2026
- 🎉 An awesome & curated list of the best LLMOps tools. — ☆204, updated Feb 4, 2026
- KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale — ☆1,144, updated this week
- Efficient and easy multi-instance LLM serving — ☆527, updated Sep 3, 2025
- Example DRA driver that developers can fork and modify to get started writing their own. — ☆120, updated Feb 23, 2026
- Supercharge Your LLM with the Fastest KV Cache Layer — ☆7,272, updated this week
- ☆194, updated Jan 20, 2026
- A toolkit to run Ray applications on Kubernetes — ☆2,355, updated this week
- ☆286, updated Feb 25, 2026
- A high-performance proxy component for the Kubernetes APIServer: it proxies the APIServer's List requests, while all other request types are reverse-proxied directly to the native APIServer. CKube additionally supports pagination, search, and indexing, and is 100% compatible with native kubectl and ku… — ☆19, updated Sep 16, 2022
- JobSet: a k8s-native API for distributed ML training and HPC workloads — ☆317, updated this week
- ☆337, updated Feb 22, 2026