volcengine / ml-platform-sdk-python
☆31 · Updated 2 years ago
Alternatives and similar repositories for ml-platform-sdk-python
Users interested in ml-platform-sdk-python are comparing it to the libraries listed below.
- Kubernetes Operator for AI and Big Data Elastic Training ☆86 · Updated 5 months ago
- Fault tolerance for DL frameworks ☆70 · Updated last year
- A Kubernetes plugin that enables dynamically adding or removing GPU resources for a running Pod ☆125 · Updated 3 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- Elastic Deep Learning Training based on Kubernetes by leveraging EDL and Volcano ☆32 · Updated 2 years ago
- ☆58 · Updated 4 years ago
- Automatic tuning for ML model deployment on Kubernetes ☆80 · Updated 7 months ago
- ☆219 · Updated last year
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆94 · Updated 2 years ago
- ☆276 · Updated last year
- GPU-scheduler-for-deep-learning ☆206 · Updated 4 years ago
- Tools for monitoring NVIDIA GPUs on Linux ☆9 · Updated 5 years ago
- Common APIs and libraries shared by other Kubeflow operator repositories ☆52 · Updated 2 years ago
- Deep learning framework performance profiling toolkit ☆285 · Updated 3 years ago
- Run your deep learning workloads on Kubernetes more easily and efficiently. ☆523 · Updated last year
- Cloud Native ML/DL Platform ☆132 · Updated 4 years ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆158 · Updated last year
- ☆122 · Updated 4 months ago
- Elastic Deep Learning for deep learning frameworks on Kubernetes ☆173 · Updated last year
- Cloud Native Machine Learning Model Registry ☆81 · Updated 2 years ago
- A library developed by Volcano Engine for high-performance reading and writing of PyTorch model files ☆19 · Updated 5 months ago
- Kubernetes Scheduler for Deep Learning ☆262 · Updated 3 years ago
- A CLI for Kubeflow ☆60 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ☆87 · Updated last month
- NVIDIA NCCL Tests for Distributed Training ☆97 · Updated this week
- GLake: optimizing GPU memory management and IO transmission ☆467 · Updated 2 months ago
- ☆532 · Updated last year
- A Kubernetes operator for MXNet jobs ☆53 · Updated 3 years ago
- ☆133 · Updated 4 years ago
- PyTorch distributed training acceleration framework ☆49 · Updated 4 months ago