nebuly-ai / nos
Module to automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas - Effortless optimization at its finest!
☆667 · Updated last year
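For context on what "dynamic partitioning" means for workloads: the idea is that pods request fractional GPU slices rather than whole devices. The sketch below is only a rough illustration, not nos's documented API; it assumes the cluster's NVIDIA device plugin exposes MIG-style resources (the resource name `nvidia.com/mig-1g.10gb` and the container image are assumptions) and uses the official Python kubernetes client to submit a pod that requests one such slice.

```python
# Minimal sketch, assuming a cluster whose device plugin advertises MIG-style
# resources (e.g. "nvidia.com/mig-1g.10gb"); the exact resource name depends on
# how the cluster admin configured GPU partitioning and is an assumption here.
from kubernetes import client, config


def submit_fractional_gpu_pod(namespace: str = "default") -> None:
    # Load credentials from ~/.kube/config; use load_incluster_config() when
    # running inside the cluster instead.
    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="mig-demo"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="cuda-smoke-test",
                    # Illustrative image; any CUDA-capable image would do.
                    image="nvidia/cuda:12.3.1-base-ubuntu22.04",
                    command=["nvidia-smi"],
                    resources=client.V1ResourceRequirements(
                        # Ask for one 1g.10gb MIG slice instead of a full GPU.
                        limits={"nvidia.com/mig-1g.10gb": "1"},
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    submit_fractional_gpu_pod()
```

In a nos-managed cluster the intent, as described above, is that pending fractional requests like this drive the real-time repartitioning of the physical GPUs.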
Alternatives and similar repositories for nos
Users interested in nos are comparing it to the libraries listed below.
- KAI Scheduler is an open-source Kubernetes-native scheduler for AI workloads at large scale ☆756 · Updated this week
- NVIDIA device plugin for Kubernetes ☆48 · Updated last year
- Controller for ModelMesh ☆239 · Updated 2 months ago
- Distributed Model Serving Framework ☆174 · Updated 2 months ago
- GPU environment and cluster management with LLM support ☆630 · Updated last year
- NVIDIA DRA Driver for GPUs ☆413 · Updated this week
- JobSet: a Kubernetes-native API for distributed ML training and HPC workloads ☆250 · Updated last week
- LeaderWorkerSet: An API for deploying a group of pods as a unit of replication ☆540 · Updated last week
- Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.) ☆491 · Updated last week
- User documentation for KServe. ☆107 · Updated last week
- deployKF builds machine learning platforms on Kubernetes. We combine the best of Kubeflow, Airflow†, and MLflow† into a complete platform… ☆446 · Updated last year
- Practical GPU Sharing Without Memory Size Constraints ☆280 · Updated 4 months ago
- GPU plugin to the node feature discovery for Kubernetes ☆303 · Updated last year
- A curated list of awesome projects and resources related to Kubeflow (a CNCF incubating project) ☆213 · Updated 3 weeks ago
- Run Slurm in Kubernetes ☆272 · Updated this week
- MIG Partition Editor for NVIDIA GPUs ☆209 · Updated last week
- Gateway API Inference Extension ☆440 · Updated this week
- AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads ☆205 · Updated last year
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. I… ☆141 · Updated this week
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆210 · Updated 4 months ago
- elastic-gpu-scheduler is a Kubernetes scheduler extender for GPU resource scheduling. ☆143 · Updated 2 years ago
- Module, Model, and Tensor Serialization/Deserialization ☆256 · Updated last week
- Holistic job manager on Kubernetes ☆116 · Updated last year
- Kubeflow Deployment Manifests ☆936 · Updated last week
- K8s device plugin for GPU sharing ☆98 · Updated 2 years ago
- An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more ☆837 · Updated this week
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes. ☆107 · Updated last week
- A Kubernetes-based framework for hassle-free handling of datasets ☆524 · Updated last month
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆484 · Updated 2 weeks ago
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆124 · Updated last week