nebuly-ai / nos
Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas - effortless optimization at its finest!
☆680 · Updated last year
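As a rough illustration of the consumer side of dynamic GPU partitioning, the sketch below uses the official Kubernetes Python client to submit a Pod that requests an extended GPU resource of the kind a device plugin (or a partitioning tool such as nos) advertises on nodes. This is a minimal sketch, not nos's own API: the resource name `nvidia.com/gpu` comes from the NVIDIA device plugin, and the namespace, Pod name, and container image are illustrative placeholders.

```python
# Sketch: submit a Pod requesting one GPU via an extended resource.
# Assumes a cluster where a GPU device plugin advertises "nvidia.com/gpu";
# Pod name, namespace, and image are placeholders.
from kubernetes import client, config


def submit_gpu_pod(namespace: str = "default") -> None:
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-demo"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="cuda",
                    image="nvidia/cuda:12.2.0-base-ubuntu22.04",
                    command=["nvidia-smi"],
                    resources=client.V1ResourceRequirements(
                        # Extended resources are declared in limits; the scheduler
                        # only places the Pod on a node exposing this resource.
                        limits={"nvidia.com/gpu": "1"},
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    submit_gpu_pod()
```

With MIG-style partitioning the same pattern applies; only the requested resource name changes (for example a `nvidia.com/mig-*` profile instead of a whole GPU).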
Alternatives and similar repositories for nos
Users interested in nos are comparing it to the libraries listed below
- GPU environment and cluster management with LLM support ☆657 · Updated last year
- Controller for ModelMesh ☆242 · Updated 7 months ago
- KAI Scheduler is an open-source Kubernetes-native scheduler for AI workloads at large scale ☆1,072 · Updated last week
- Distributed Model Serving Framework ☆182 · Updated 3 months ago
- NVIDIA device plugin for Kubernetes ☆49 · Updated last year
- NVIDIA DRA Driver for GPUs ☆542 · Updated this week
- JobSet: a Kubernetes-native API for distributed ML training and HPC workloads ☆299 · Updated this week
- User documentation for KServe. ☆109 · Updated last month
- deployKF builds machine learning platforms on Kubernetes. We combine the best of Kubeflow, Airflow†, and MLflow† into a complete platform… ☆463 · Updated last year
- Practical GPU Sharing Without Memory Size Constraints ☆297 · Updated 9 months ago
- GPU plugin to the node feature discovery for Kubernetes ☆308 · Updated last year
- Module, Model, and Tensor Serialization/Deserialization ☆285 · Updated 5 months ago
- Run Slurm in Kubernetes ☆343 · Updated last week
- A curated list of awesome projects and resources related to Kubeflow (a CNCF incubating project) ☆223 · Updated 3 weeks ago
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆652 · Updated this week
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) ☆506 · Updated last week
- AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads ☆204 · Updated 2 years ago
- Gateway API Inference Extension ☆567 · Updated this week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆502 · Updated last week
- elastic-gpu-scheduler is a Kubernetes scheduler extender for GPU resource scheduling. ☆145 · Updated 3 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆214 · Updated 9 months ago
- Kubeflow Deployment Manifests ☆983 · Updated last week
- A lightweight tool to get an AI Infrastructure Stack up in minutes, not days. K3ai will take care of setting up K8s for you, deploying the AI tool… ☆125 · Updated 3 years ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆411 · Updated last week
- ☆274 · Updated last week
- An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more ☆870 · Updated this week
- CUDA checkpoint and restore utility ☆403 · Updated 4 months ago
- A toolkit to run Ray applications on Kubernetes ☆2,275 · Updated this week
- MIG Partition Editor for NVIDIA GPUs ☆235 · Updated this week
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. I… ☆162 · Updated this week
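Many of the projects above (device plugins, MIG tooling, GPU schedulers, DRA drivers) ultimately surface GPU capacity as extended node resources. A minimal sketch, again using the Kubernetes Python client, that lists whatever GPU-related resources each node currently reports as allocatable; the `nvidia.com/` prefix filter is an assumption matching the NVIDIA device plugin and MIG resource naming, and other vendors use different prefixes.

```python
# Sketch: print GPU-related allocatable resources per node.
# Assumes the "nvidia.com/" resource-name prefix used by the NVIDIA
# device plugin and MIG tooling; adjust the prefix for other vendors.
from kubernetes import client, config


def print_gpu_allocatable() -> None:
    config.load_kube_config()
    for node in client.CoreV1Api().list_node().items:
        gpu_resources = {
            name: qty
            for name, qty in (node.status.allocatable or {}).items()
            if name.startswith("nvidia.com/")
        }
        if gpu_resources:
            print(node.metadata.name, gpu_resources)


if __name__ == "__main__":
    print_gpu_allocatable()
```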