kubedl-io / morphling
Automatic tuning for ML model deployment on Kubernetes
☆80 · Updated 7 months ago
Alternatives and similar repositories for morphling
Users interested in morphling are comparing it to the libraries listed below.
- Kubernetes Operator for AI and Bigdata Elastic Training ☆86 · Updated 5 months ago
- A Kubernetes plugin that enables dynamically adding or removing GPU resources for a running Pod ☆125 · Updated 3 years ago
- GPU-scheduler-for-deep-learning ☆206 · Updated 4 years ago
- Fault tolerance for DL frameworks ☆70 · Updated last year
- Common APIs and libraries shared by other Kubeflow operator repositories ☆52 · Updated 2 years ago
- Kubernetes Scheduler for Deep Learning ☆262 · Updated 3 years ago
- Device plugins for Volcano, e.g. GPU ☆123 · Updated 3 months ago
- GPU scheduler for elastic/distributed deep learning workloads in Kubernetes clusters (IC2E'23) ☆34 · Updated last year
- ☆133 · Updated 4 years ago
- ☆117 · Updated 2 years ago
- Fine-grained GPU sharing primitives ☆141 · Updated 5 years ago
- An efficient GPU resource sharing system with fine-grained control for Linux platforms ☆83 · Updated last year
- Run your deep learning workloads on Kubernetes more easily and efficiently ☆523 · Updated last year
- Forked form ☆11 · Updated 4 years ago
- elastic-gpu-scheduler is a Kubernetes scheduler extender for GPU resource scheduling ☆141 · Updated 2 years ago
- HAMi-core compiles libvgpu.so, which enforces hard GPU limits inside containers ☆171 · Updated this week
- Kubernetes RDMA SR-IOV device plugin ☆111 · Updated 4 years ago
- RDMA device plugin for Kubernetes ☆215 · Updated last year
- Share GPUs between Pods in Kubernetes ☆209 · Updated 2 years ago
- Elastic Deep Learning training on Kubernetes, leveraging EDL and Volcano ☆32 · Updated 2 years ago
- Yoda is a Kubernetes scheduler based on GPU metrics ☆139 · Updated 3 years ago
- Code for "Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads" (OSDI 2020) ☆128 · Updated 10 months ago
- ☆267 · Updated last week
- NVIDIA NCCL Tests for Distributed Training ☆93 · Updated last week
- A library developed by Volcano Engine for high-performance reading and writing of PyTorch model files ☆19 · Updated 5 months ago
- Hooks CUDA-related dynamic libraries using automated code generation tools ☆158 · Updated last year
- ☆58 · Updated 4 years ago
- Artifacts for our NSDI'23 paper TGS ☆76 · Updated last year
- An Efficient Dynamic Resource Scheduler for Deep Learning Clusters ☆42 · Updated 7 years ago
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆126 · Updated 3 years ago