SlinkyProject / slurm-operator
Run Slurm on Kubernetes. A Slinky project.
☆198 · Updated last week
Alternatives and similar repositories for slurm-operator
Users interested in slurm-operator are comparing it to the libraries listed below.
- Run Slurm in Kubernetes ☆330 · Updated this week
- A Slurm cluster for Kubernetes ☆66 · Updated last year
- Slurm in Kubernetes ☆43 · Updated 3 weeks ago
- Run Slurm as a Kubernetes scheduler. A Slinky project. ☆52 · Updated last week
- JobSet: a k8s native API for distributed ML training and HPC workloads ☆286 · Updated last week
- MIG Partition Editor for NVIDIA GPUs ☆231 · Updated this week
- NVIDIA DRA Driver for GPUs ☆504 · Updated this week
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆140 · Updated last week
- A Lustre container storage interface that allows Kubernetes to mount/unmount provisioned Lustre filesystems into containers. ☆42 · Updated 3 weeks ago
- GPU plugin to the node feature discovery for Kubernetes ☆308 · Updated last year
- K8s device plugin for GPU sharing ☆99 · Updated 2 years ago
- KAI Scheduler is an open source Kubernetes Native scheduler for AI workloads at large scale ☆956 · Updated this week
- NVIDIA Network Operator ☆302 · Updated last week
- GenAI inference performance benchmarking tool ☆134 · Updated last week
- InterLink aims to provide an abstraction for the execution of a Kubernetes pod on any remote resource capable of managing a Container exe… ☆95 · Updated this week
- This repo includes everything you need to know about deploying GPU nodes on OCI ☆40 · Updated this week
- A toolkit for discovering cluster network topology. ☆84 · Updated last week
- Kubernetes (k8s) device plugin to enable registration of AMD GPU to a container cluster ☆358 · Updated last week
- Kubernetes Operator, ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes. ☆117 · Updated this week
- The NVIDIA GPU driver container allows the provisioning of the NVIDIA driver through the use of containers. ☆146 · Updated this week
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) ☆500 · Updated this week
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling ☆119 · Updated this week
- Run cloud native workloads on NVIDIA GPUs ☆208 · Updated 2 months ago
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆624 · Updated this week
- NVIDIA k8s device plugin for Kubevirt ☆268 · Updated last week