volcengine / veTurboIO
A library developed by Volcano Engine for high-performance reading and writing of PyTorch model files.
☆25 · Updated last year
Alternatives and similar repositories for veTurboIO
Users interested in veTurboIO are comparing it to the libraries listed below.
- Offline optimization of your disaggregated Dynamo graph ☆137 · Updated this week
- GLake: optimizing GPU memory management and IO transmission. ☆494 · Updated 9 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated 2 weeks ago
- Automatic tuning for ML model deployment on Kubernetes ☆81 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated 2 years ago
- GPU-scheduler-for-deep-learning ☆210 · Updated 5 years ago
- ☆58 · Updated 5 years ago
- NCCL Fast Socket is a transport-layer plugin that improves NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆250 · Updated 3 weeks ago
- Efficient and easy multi-instance LLM serving ☆520 · Updated 4 months ago
- ☆337 · Updated this week
- PyTorch distributed training acceleration framework ☆54 · Updated 4 months ago
- NVIDIA NCCL Tests for Distributed Training ☆132 · Updated this week
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆352 · Updated last week
- A workload for deploying LLM inference services on Kubernetes ☆153 · Updated 2 weeks ago
- ☆47 · Updated last year
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆83 · Updated 3 weeks ago
- KV cache store for distributed LLM inference ☆378 · Updated last month
- NVIDIA Inference Xfer Library (NIXL) ☆788 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆120 · Updated last year
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- The DGL Operator makes it easy to run Deep Graph Library (DGL) graph neural network training on Kubernetes ☆44 · Updated 4 years ago
- Fault tolerance for DL frameworks ☆70 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆107 · Updated 3 weeks ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- A lightweight design for computation-communication overlap. ☆207 · Updated 2 weeks ago
- Kubernetes Operator for AI and Bigdata Elastic Training ☆90 · Updated 11 months ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆133 · Updated last year
- RDMA and SHARP plugins for the NCCL library ☆218 · Updated last month
- ☆72 · Updated 3 months ago