NVIDIA / KAI-Scheduler
KAI Scheduler is an open-source, Kubernetes-native scheduler for large-scale AI workloads.
☆869 · Updated this week
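For context on how such a scheduler is typically used: a workload opts in to a non-default scheduler through the standard `spec.schedulerName` pod field. The sketch below is hypothetical — the scheduler name `kai-scheduler` and the `kai.scheduler/queue` queue label are assumptions, not confirmed by this listing; check the KAI-Scheduler documentation for the actual values.

```yaml
# Hypothetical pod spec targeting KAI Scheduler.
# The scheduler name and queue label below are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: training-pod
  labels:
    kai.scheduler/queue: team-a     # queue label key/value are assumptions
spec:
  schedulerName: kai-scheduler      # standard k8s field; name is an assumption
  containers:
    - name: trainer
      image: nvcr.io/nvidia/pytorch:24.08-py3
      resources:
        limits:
          nvidia.com/gpu: 1         # standard NVIDIA device-plugin resource
```

Pods without a `schedulerName` continue to be placed by the default kube-scheduler, so a custom scheduler like this can be adopted incrementally, one workload at a time.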
Alternatives and similar repositories for KAI-Scheduler
Users interested in KAI-Scheduler are comparing it to the repositories listed below.
- NVIDIA DRA Driver for GPUs ☆458 · Updated last week
- Gateway API Inference Extension ☆501 · Updated this week
- LeaderWorkerSet: an API for deploying a group of pods as a unit of replication ☆601 · Updated this week
- JobSet: a Kubernetes-native API for distributed ML training and HPC workloads ☆268 · Updated last week
- Kubernetes enhancements for network-topology-aware gang scheduling and autoscaling ☆71 · Updated 2 weeks ago
- GenAI inference performance benchmarking tool ☆106 · Updated last week
- A toolkit for discovering cluster network topology ☆74 · Updated this week
- An operator for deploying and maintaining NVIDIA NIMs and NeMo microservices in a Kubernetes environment ☆130 · Updated this week
- MIG Partition Editor for NVIDIA GPUs ☆218 · Updated last week
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes ☆1,907 · Updated this week
- Run Slurm in Kubernetes ☆300 · Updated this week
- Kubernetes operator for MPI-based applications (distributed training, HPC, etc.) ☆499 · Updated last week
- A federation scheduler for multi-cluster environments ☆54 · Updated 4 months ago
- Controller for ModelMesh ☆237 · Updated 4 months ago
- Kubernetes-native job queueing ☆2,031 · Updated this week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆261 · Updated last week
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifact metadata. I… ☆150 · Updated last week
- AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for machine learning inference workloads ☆205 · Updated last year
- Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elas… ☆672 · Updated last year
- llm-d Helm charts and deployment examples ☆45 · Updated 3 weeks ago
- NVIDIA device plugin for Kubernetes ☆48 · Updated last year
- GPU plugin to the node feature discovery for Kubernetes ☆305 · Updated last year
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs)