sgl-project / ome
Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, TensorRT-LLM, and Triton.
☆365 · Updated this week
Alternatives and similar repositories for ome
Users interested in ome are comparing it to the libraries listed below.
- Virtualized Elastic KV Cache for Dynamic GPU Sharing and Beyond ☆773 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆263 · Updated this week
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆475 · Updated this week
- A workload for deploying LLM inference services on Kubernetes ☆168 · Updated last week
- NVIDIA Inference Xfer Library (NIXL) ☆876 · Updated this week
- KV cache store for distributed LLM inference ☆390 · Updated 2 months ago
- NVIDIA NCCL Tests for Distributed Training ☆134 · Updated 2 weeks ago
- Efficient and easy multi-instance LLM serving ☆524 · Updated 5 months ago
- Offline optimization of your disaggregated Dynamo graph ☆184 · Updated this week
- A high-performance, lightweight router for large-scale vLLM deployments ☆101 · Updated last week
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling ☆159 · Updated this week
- Distributed KV cache scheduling & offloading libraries ☆101 · Updated last week
- A toolkit for discovering cluster network topology ☆96 · Updated this week
- The driver for LMCache core to run in vLLM ☆60 · Updated last year
- A lightweight vLLM simulator for mocking out replicas ☆85 · Updated last week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆662 · Updated last week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆287 · Updated 2 weeks ago
- CUDA checkpoint and restore utility ☆410 · Updated 4 months ago
- ☆322 · Updated last year
- Inference scheduler for llm-d ☆127 · Updated this week
- Cloud Native Benchmarking of Foundation Models ☆45 · Updated 6 months ago
- Materials for learning SGLang ☆738 · Updated last month
- AIPerf is a comprehensive benchmarking tool that measures the performance of generative AI models served by your preferred inference solu… ☆126 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆391 · Updated this week
- ☆280 · Updated this week
- Gateway API Inference Extension ☆576 · Updated this week
- Kubernetes-native AI serving platform for scalable model serving ☆208 · Updated this week
- GenAI inference performance benchmarking tool ☆142 · Updated last week
- NVSentinel is a cross-platform fault remediation service designed to rapidly remediate runtime node-level issues in GPU-accelerated compu… ☆177 · Updated this week
- GLake: optimizing GPU memory management and IO transmission ☆497 · Updated 10 months ago