nebuly-ai / nos
Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas - Effortless optimization at its finest!
☆678 · Updated last year
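For context on how a workload might consume the GPU slices that nos carves out, below is a minimal, hypothetical sketch using the official Kubernetes Python client. The sliced resource name (`nvidia.com/gpu-4gb`), container image, and namespace are illustrative assumptions, not taken from the nos documentation.

```python
# Hypothetical sketch: submit a Pod that requests a fractional GPU slice,
# assuming nos has already partitioned the node and the device plugin
# advertises a sliced resource such as "nvidia.com/gpu-4gb" (assumed name).
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-inference", namespace="default"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="model",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # illustrative image
                command=["python", "-c", "import torch; print(torch.cuda.is_available())"],
                resources=client.V1ResourceRequirements(
                    # Assumed sliced-GPU resource name exposed after partitioning.
                    limits={"nvidia.com/gpu-4gb": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The point of the sketch is only that, from the workload's perspective, a dynamically partitioned GPU is requested like any other extended resource; the scheduler and elastic quotas decide whether and where it fits.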
Alternatives and similar repositories for nos
Users interested in nos are comparing it to the libraries listed below.
- Controller for ModelMesh · ☆242 · Updated 6 months ago
- GPU environment and cluster management with LLM support · ☆656 · Updated last year
- KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale · ☆1,016 · Updated this week
- Distributed Model Serving Framework · ☆181 · Updated 2 months ago
- NVIDIA device plugin for Kubernetes · ☆49 · Updated last year
- Practical GPU Sharing Without Memory Size Constraints · ☆296 · Updated 8 months ago
- NVIDIA DRA Driver for GPUs · ☆515 · Updated this week
- User documentation for KServe · ☆109 · Updated 2 weeks ago
- A curated list of awesome projects and resources related to Kubeflow (a CNCF incubating project) · ☆223 · Updated last month
- JobSet: a k8s-native API for distributed ML training and HPC workloads · ☆289 · Updated last week
- deployKF builds machine learning platforms on Kubernetes. We combine the best of Kubeflow, Airflow†, and MLflow† into a complete platform… · ☆458 · Updated last year
- GPU plugin to the node feature discovery for Kubernetes · ☆308 · Updated last year
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication · ☆639 · Updated this week
- Run Slurm in Kubernetes · ☆335 · Updated this week
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) · ☆501 · Updated this week
- AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads · ☆205 · Updated 2 years ago
- Module, Model, and Tensor Serialization/Deserialization · ☆279 · Updated 4 months ago
- MIG Partition Editor for NVIDIA GPUs · ☆233 · Updated this week
- A multi-cluster batch queuing system for high-throughput workloads on Kubernetes · ☆562 · Updated this week
- elastic-gpu-scheduler is a Kubernetes scheduler extender for GPU resource scheduling · ☆144 · Updated 3 years ago
- Kubeflow Deployment Manifests · ☆971 · Updated last week
- An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more · ☆864 · Updated this week
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. I… · ☆157 · Updated this week
- Gateway API Inference Extension · ☆548 · Updated this week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs · ☆216 · Updated 8 months ago
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes · ☆119 · Updated last week
- CUDA checkpoint and restore utility · ☆397 · Updated 3 months ago
- K8s device plugin for GPU sharing · ☆99 · Updated 2 years ago
- NVIDIA GPU Operator creates, configures, and manages GPUs in Kubernetes · ☆2,453 · Updated this week
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling · ☆131 · Updated last week