nebuly-ai / nos
Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas. Effortless optimization at its finest!
☆629 · Updated 6 months ago
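Below is a minimal sketch of what consuming a nos-partitioned GPU can look like from the workload side: a Pod that requests a memory-sized GPU slice instead of a whole device, so the scheduler can bin-pack several such Pods onto one GPU. The Pod name, container image, and especially the slice resource name `nvidia.com/gpu-4gb` are illustrative assumptions rather than values taken from the nos documentation; check which slice resources nos actually advertises on your cluster.

```python
# Sketch: request a fractional GPU slice from a cluster running nos
# dynamic partitioning (resource name below is a placeholder).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-slice-demo", namespace="default"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda-test",
                image="nvidia/cuda:12.2.0-base-ubuntu22.04",
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    # Hypothetical slice resource: a 4 GB share of a GPU
                    # rather than the whole "nvidia.com/gpu" device.
                    limits={"nvidia.com/gpu-4gb": "1"},
                ),
            )
        ],
    ),
)

# Submit the Pod; nos-managed partitioning decides where the slice fits.
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```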
Related projects
Alternatives and complementary repositories for nos
- Controller for ModelMesh ☆204 · Updated 3 months ago
- NVIDIA device plugin for Kubernetes ☆46 · Updated 8 months ago
- Dynamic Resource Allocation (DRA) for NVIDIA GPUs in Kubernetes ☆259 · Updated this week
- GPU environment and cluster management with LLM support ☆490 · Updated 5 months ago
- A curated list of awesome projects and resources related to Kubeflow (a CNCF incubating project) ☆194 · Updated 3 months ago
- Distributed Model Serving Framework ☆154 · Updated last month
- AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads ☆202 · Updated 11 months ago
- User documentation for KServe. ☆105 · Updated this week
- JobSet: a k8s native API for distributed ML training and HPC workloads ☆144 · Updated this week
- Practical GPU Sharing Without Memory Size Constraints ☆224 · Updated last month
- GPU Sharing Scheduler for Kubernetes Cluster ☆1,409 · Updated 10 months ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆183 · Updated 2 months ago
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) ☆440 · Updated 3 weeks ago
- GPU plugin to the node feature discovery for Kubernetes ☆291 · Updated 5 months ago
- An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more ☆717 · Updated this week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆739 · Updated last week
- A repository for Kustomize manifests ☆818 · Updated last week
- elastic-gpu-scheduler is a Kubernetes scheduler extender for GPU resource scheduling. ☆135 · Updated last year
- Repository for open inference protocol specification ☆42 · Updated 3 months ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆332 · Updated 3 weeks ago
- Kubernetes-native Job Queueing ☆1,389 · Updated this week
- GPU Sharing Device Plugin for Kubernetes Cluster ☆470 · Updated last year
- A lightweight tool to get an AI Infrastructure Stack up in minutes not days. K3ai will take care of setting up K8s for you, deploy the AI tool… ☆123 · Updated 2 years ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆426 · Updated this week
- deployKF builds machine learning platforms on Kubernetes. We combine the best of Kubeflow, Airflow†, and MLflow† into a complete platform… ☆374 · Updated 3 months ago
- Docker for Your ML/DL Models Based on OCI Artifacts ☆461 · Updated 9 months ago
- Run your deep learning workloads on Kubernetes more easily and efficiently. ☆506 · Updated 8 months ago
- MIG Partition Editor for NVIDIA GPUs ☆173 · Updated this week
- Share GPU between Pods in Kubernetes ☆201 · Updated last year
- NVIDIA GPU metrics exporter for Prometheus leveraging DCGM ☆913 · Updated last week