kubernetes-sigs / gateway-api-inference-extension
Gateway API Inference Extension
☆573 · Updated this week
Alternatives and similar repositories for gateway-api-inference-extension
Users interested in gateway-api-inference-extension are comparing it to the repositories listed below.
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication (☆656, updated this week)
- NVIDIA DRA Driver for GPUs (☆553, updated this week)
- JobSet: a k8s native API for distributed ML training and HPC workloads (☆300, updated last week)
- KAI Scheduler is an open source Kubernetes Native scheduler for AI workloads at large scale (☆1,095, updated this week)
- GenAI inference performance benchmarking tool (☆141, updated last week)
- Inference scheduler for llm-d (☆124, updated this week)
- ☆209, updated this week
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. I… (☆161, updated this week)
- NVSentinel is a cross-platform fault remediation service designed to rapidly remediate runtime node-level issues in GPU-accelerated compu… (☆173, updated this week)
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! (☆287, updated last week)
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling (☆159, updated this week)
- A federation scheduler for multi-cluster (☆61, updated this week)
- Example DRA driver that developers can fork and modify to get them started writing their own. (☆114, updated last week)
- Cloud Native Artificial Intelligence Model Format Specification (☆174, updated last week)
- Kubernetes-native AI serving platform for scalable model serving. (☆173, updated last week)
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. (☆142, updated last week)
- ☆191, updated 2 weeks ago
- Simplified model deployment on llm-d (☆28, updated 7 months ago)
- llm-d helm charts and deployment examples (☆48, updated last month)
- Controller for ModelMesh (☆242, updated 7 months ago)
- Open Model Engine (OME): Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… (☆365, updated this week)
- GPU plugin to the node feature discovery for Kubernetes (☆308, updated last year)
- Node Resource Interface (☆361, updated this week)
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes (☆2,429, updated this week)
- Device plugins for Volcano, e.g. GPU (☆131, updated 10 months ago)
- ☆122, updated 3 years ago
- agent-sandbox enables easy management of isolated, stateful, singleton workloads, ideal for use cases like AI agent runtimes. (☆825, updated this week)
- ☆90, updated this week
- A workload for deploying LLM inference services on Kubernetes (☆167, updated this week)
- knavigator is a development, testing, and optimization toolkit for AI/ML scheduling systems at scale on Kubernetes. (☆74, updated 6 months ago)