volcengine / veTurboIO
A library developed by Volcano Engine for high-performance reading and writing of PyTorch model files.
☆23 · Updated 9 months ago
Alternatives and similar repositories for veTurboIO
Users interested in veTurboIO are comparing it to the libraries listed below.
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆268 · Updated 2 years ago
- Efficient and easy multi-instance LLM serving ☆493 · Updated last month
- NVIDIA NCCL Tests for Distributed Training ☆112 · Updated last week
- Automatic tuning for ML model deployment on Kubernetes ☆81 · Updated 11 months ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆120 · Updated last year
- GLake: optimizing GPU memory management and IO transmission. ☆479 · Updated 6 months ago
- NVIDIA Inference Xfer Library (NIXL) ☆654 · Updated this week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆111 · Updated 4 months ago
- OME is a Kubernetes operator for enterprise-grade management and serving of Large Language Models (LLMs) ☆286 · Updated this week
- ☆58 · Updated 5 years ago
- KV cache store for distributed LLM inference ☆338 · Updated last month
- GPU-scheduler-for-deep-learning ☆210 · Updated 4 years ago
- ☆300 · Updated last week
- PyTorch distributed training acceleration framework ☆52 · Updated last month
- A workload for deploying LLM inference services on Kubernetes ☆75 · Updated 2 weeks ago
- Offline optimization of your disaggregated Dynamo graph ☆72 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆215 · Updated last week
- ☆46 · Updated 9 months ago
- A low-latency & high-throughput serving engine for LLMs ☆424 · Updated 4 months ago
- RDMA and SHARP plugins for nccl library ☆208 · Updated last month
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆58 · Updated 2 weeks ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆266 · Updated last month
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆96 · Updated 2 years ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆115 · Updated last year
- A Kubernetes plugin which enables dynamically adding or removing GPU resources for a running Pod ☆127 · Updated 3 years ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆62 · Updated last year
- Hooked CUDA-related dynamic libraries by using automated code generation tools. ☆167 · Updated last year
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆129 · Updated last year
- ☆219 · Updated 2 years ago
- Fine-grained GPU sharing primitives ☆144 · Updated 2 months ago