ashrafgt / k8s-gpu-hpa
Horizontal Pod Autoscaling for Kubernetes using Nvidia GPU Metrics
☆28 · Updated 3 years ago
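At its core, a GPU-based autoscaler like this applies the standard Kubernetes HPA scaling rule to a custom metric (here, GPU utilization) instead of CPU. A minimal sketch of that rule in Python — the function name and the example utilization figures are illustrative, not taken from the repo:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Standard Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * (current_metric / target_metric))."""
    if target_metric <= 0:
        raise ValueError("target metric must be positive")
    return math.ceil(current_replicas * (current_metric / target_metric))

# e.g. 2 pods averaging 90% GPU utilization against a 60% target:
print(desired_replicas(2, 90.0, 60.0))  # → 3
```

In practice the GPU utilization value would come from an exporter (such as NVIDIA's DCGM metrics) surfaced through a custom-metrics adapter; the HPA controller then applies this ratio each reconciliation cycle.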
Related projects
Alternatives and complementary repositories for k8s-gpu-hpa
- Distributed Model Serving Framework ☆154 · Updated 3 weeks ago
- Plugin for deploying MLflow models to TorchServe ☆105 · Updated last year
- Unified runtime-adapter image of the sidecar containers which run in the modelmesh pods ☆21 · Updated last month
- Getting Started with the CoreWeave Kubernetes GPU Cloud ☆68 · Updated last week
- NVIDIA device plugin for Kubernetes ☆46 · Updated 8 months ago
- A top-like tool for monitoring GPUs in a cluster ☆80 · Updated 8 months ago
- Argoflow has been superseded by deployKF ☆137 · Updated last year
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆183 · Updated 2 months ago
- markdown docs ☆68 · Updated this week
- Controller for ModelMesh ☆204 · Updated 3 months ago
- elastic-gpu-scheduler is a Kubernetes scheduler extender for GPU resource scheduling. ☆135 · Updated last year
- elastic-gpu-agent is a Kubernetes device plugin for GPU resource allocation on nodes. ☆54 · Updated 2 years ago
- JobSet: a k8s native API for distributed ML training and HPC workloads ☆144 · Updated this week
- Experiments with Model Training, Deployment & Monitoring ☆36 · Updated 8 months ago
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes. ☆74 · Updated 2 weeks ago
- Repository for the open inference protocol specification ☆42 · Updated 3 months ago
- Module, Model, and Tensor Serialization/Deserialization ☆187 · Updated 3 weeks ago
- Simple dependency injection framework for Python ☆20 · Updated 5 months ago
- Fork of the NVIDIA device plugin for Kubernetes with support for shared GPUs by declaring GPUs multiple times ☆88 · Updated 2 years ago
- GPU plugin for node feature discovery in Kubernetes ☆291 · Updated 5 months ago
- AWS virtual GPU device plugin, which provides the capability to use smaller virtual GPUs for your machine learning inference workloads ☆202 · Updated 11 months ago
- The Triton backend for PyTorch TorchScript models. ☆123 · Updated this week
- K3ai-core is the core library for the Go installer, which will replace the current bash installer. ☆23 · Updated 3 years ago
- Backend server for envd ☆21 · Updated 10 months ago
- The Triton backend for TensorRT. ☆62 · Updated this week
- User documentation for KServe. ☆105 · Updated this week
- Unofficial Go package for the Triton Inference Server (https://github.com/triton-inference-server/server) ☆43 · Updated this week