volcengine / veTurboIO
A library developed by Volcano Engine for high-performance reading and writing of PyTorch model files.
☆19 · Updated 4 months ago
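For orientation, below is a minimal sketch of saving and reloading a PyTorch state dict. The `torch.save`/`torch.load` calls are a working baseline; the `veturboio` calls shown in comments are assumptions about the library's interface, not a confirmed API, and should be checked against the repository.

```python
# Minimal sketch: serialize and reload a PyTorch state dict.
# The veturboio calls in the comments below are assumptions about the
# library's interface and should be verified against the repository.
import torch

model = torch.nn.Linear(16, 4)

# Baseline with standard PyTorch serialization.
torch.save(model.state_dict(), "model.pt")
state_dict = torch.load("model.pt", map_location="cpu")
model.load_state_dict(state_dict)

# Hypothetical veTurboIO equivalents (safetensors-style names, unverified):
# import veturboio
# veturboio.save_file(model.state_dict(), "model.safetensors")
# state_dict = veturboio.load("model.safetensors", map_location="cpu")
```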
Alternatives and similar repositories for veTurboIO
Users interested in veTurboIO are comparing it to the libraries listed below.
- Automatic tuning for ML model deployment on Kubernetes ☆80 · Updated 6 months ago
- NVIDIA NCCL Tests for Distributed Training ☆91 · Updated last week
- GPU-scheduler-for-deep-learning ☆205 · Updated 4 years ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆116 · Updated last year
- ☆58 · Updated 4 years ago
- ☆36 · Updated 5 months ago
- ☆49 · Updated 2 months ago
- Fine-grained GPU sharing primitives ☆141 · Updated 5 years ago
- A Kubernetes plugin which enables dynamically adding or removing GPU resources for a running Pod ☆125 · Updated 3 years ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆80 · Updated last week
- PyTorch distributed training acceleration framework ☆49 · Updated 3 months ago
- Kubernetes Operator for AI and Bigdata Elastic Training ☆85 · Updated 4 months ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆100 · Updated last year
- KV cache store for distributed LLM inference ☆250 · Updated this week
- ☆82 · Updated 2 years ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆121 · Updated last year
- Hooked CUDA-related dynamic libraries by using automated code generation tools. ☆156 · Updated last year
- RDMA and SHARP plugins for the NCCL library ☆193 · Updated last month
- PipeSwitch: Fast Pipelined Context Switching for Deep Learning Applications ☆126 · Updated 3 years ago
- Fault tolerance for DL frameworks ☆70 · Updated last year
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆94 · Updated 2 years ago
- Forked from ☆11 · Updated 4 years ago
- Efficient and easy multi-instance LLM serving ☆420 · Updated this week
- Intelligent platform for AI workloads ☆37 · Updated 2 years ago
- Stateful LLM Serving ☆70 · Updated 2 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- ☆25 · Updated 2 months ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated last year
- NCCL Profiling Kit ☆134 · Updated 10 months ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆61 · Updated 11 months ago