NVIDIA / KAI-Scheduler
KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale.
☆756 · Updated this week
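As with any alternative Kubernetes scheduler, workloads opt in by naming the scheduler in their pod spec. A minimal sketch, assuming KAI Scheduler is installed under the name `kai-scheduler` and that queue assignment uses the `runai/queue` label (both taken from the project's quickstart; verify the names against the version you deploy):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
  labels:
    runai/queue: test        # assumed queue label; check your KAI install's docs
spec:
  schedulerName: kai-scheduler  # route this pod to KAI instead of the default scheduler
  containers:
    - name: main
      image: ubuntu
      command: ["sleep", "infinity"]
      resources:
        limits:
          nvidia.com/gpu: "1"   # request one GPU so KAI's GPU-aware logic applies
```

Pods without the `schedulerName` field continue to be placed by the default kube-scheduler, so the two can coexist in one cluster.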
Alternatives and similar repositories for KAI-Scheduler
Users interested in KAI-Scheduler are comparing it to the repositories listed below.
- NVIDIA DRA Driver for GPUs ☆413 · Updated this week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆546 · Updated last week
- Gateway API Inference Extension ☆451 · Updated this week
- JobSet: a k8s-native API for distributed ML training and HPC workloads ☆251 · Updated 2 weeks ago
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment ☆124 · Updated this week
- A toolkit for discovering cluster network topology ☆63 · Updated this week
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. I… ☆141 · Updated this week
- MIG Partition Editor for NVIDIA GPUs ☆209 · Updated last week
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) ☆491 · Updated last week
- GenAI inference performance benchmarking tool ☆76 · Updated this week
- ☆139 · Updated last month
- Controller for ModelMesh ☆239 · Updated 2 months ago
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆239 · Updated last week
- Run Slurm in Kubernetes ☆272 · Updated this week
- GPU plugin to the node feature discovery for Kubernetes ☆303 · Updated last year
- A federation scheduler for multi-cluster ☆48 · Updated 2 months ago
- ☆289 · Updated this week
- Practical GPU sharing without memory size constraints ☆281 · Updated 4 months ago
- Kubernetes-native job queueing ☆1,953 · Updated this week
- llm-d is a Kubernetes-native high-performance distributed LLM inference framework ☆1,621 · Updated this week
- Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elas… ☆667 · Updated last year
- HAMi-core compiles libvgpu.so, which enforces hard limits on GPU use in containers ☆199 · Updated last week
- ☆163 · Updated last week
- Run Slurm on Kubernetes. A Slinky project. ☆153 · Updated this week
- Example DRA driver that developers can fork and modify to get started writing their own ☆87 · Updated 3 weeks ago
- AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-te… ☆1,045 · Updated this week
- Holistic job manager on Kubernetes ☆116 · Updated last year
- AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for machine learning inference workloads ☆205 · Updated last year
- NVIDIA Network Operator ☆272 · Updated this week
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆413 · Updated this week