volcengine / veTurboIO
A library developed by Volcano Engine for high-performance reading and writing of PyTorch model files.
☆15, updated 2 months ago
Alternatives and similar repositories for veTurboIO:
Users interested in veTurboIO are comparing it to the libraries listed below.
- Automatic tuning for ML model deployment on Kubernetes (☆81, updated 4 months ago)
- NVIDIA NCCL Tests for Distributed Training (☆85, updated last week)
- ☆58, updated 4 years ago
- GPU-scheduler-for-deep-learning (☆203, updated 4 years ago)
- Elastic Deep Learning Training based on Kubernetes by leveraging EDL and Volcano (☆32, updated last year)
- A Kubernetes plugin that enables dynamically adding or removing GPU resources for a running Pod (☆124, updated 3 years ago)
- Efficient and easy multi-instance LLM serving (☆339, updated this week)
- Intelligent platform for AI workloads (☆37, updated 2 years ago)
- NCCL Fast Socket is a transport-layer plugin to improve NCCL collective communication performance on Google Cloud (☆116, updated last year)
- Fine-grained GPU sharing primitives (☆141, updated 5 years ago)
- Kubernetes Operator for AI and Big Data Elastic Training (☆85, updated 2 months ago)
- PyTorch distributed training acceleration framework (☆44, updated last month)
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (☆93, updated last year)
- Common APIs and libraries shared by other Kubeflow operator repositories (☆52, updated last year)
- Forked from (☆10, updated 4 years ago)
- ☆36, updated 3 months ago
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" (☆60, updated 9 months ago)
- ☆237, updated this week
- Hooks CUDA-related dynamic libraries using automated code-generation tools (☆150, updated last year)
- Kubernetes RDMA SR-IOV device plugin (☆110, updated 4 years ago)
- RDMA and SHARP plugins for the NCCL library (☆183, updated 2 months ago)
- A low-latency, high-throughput serving engine for LLMs (☆325, updated last month)
- KV cache store for distributed LLM inference (☆78, updated this week)
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training (☆267, updated last year)
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications (☆127, updated 2 years ago)
- ☆131, updated 3 years ago
- Fault tolerance for DL frameworks (☆69, updated last year)
- GLake: optimizing GPU memory management and IO transmission (☆445, updated 3 months ago)