volcengine / veTurboIO
A library developed by Volcano Engine for high-performance reading and writing of PyTorch model files.
☆25 · Updated last year
Alternatives and similar repositories for veTurboIO
Users interested in veTurboIO are comparing it to the libraries listed below.
- Automatic tuning for ML model deployment on Kubernetes ☆81 · Updated last year
- Offline optimization of your disaggregated Dynamo graph ☆168 · Updated this week
- ☆58 · Updated 5 years ago
- GPU-scheduler-for-deep-learning ☆210 · Updated 5 years ago
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆122 · Updated 2 years ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- A workload for deploying LLM inference services on Kubernetes ☆160 · Updated last week
- NVIDIA NCCL Tests for Distributed Training ☆133 · Updated this week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆260 · Updated this week
- Efficient and easy multi-instance LLM serving ☆523 · Updated 4 months ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆91 · Updated 2 weeks ago
- Open Model Engine (OME) — Kubernetes operator for LLM serving, GPU scheduling, and model lifecycle management. Works with SGLang, vLLM, T… ☆365 · Updated this week
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated 2 years ago
- GLake: optimizing GPU memory management and IO transmission. ☆497 · Updated 10 months ago
- ☆340 · Updated 3 weeks ago
- KV cache store for distributed LLM inference ☆389 · Updated 2 months ago
- Stateful LLM Serving ☆95 · Updated 10 months ago
- ☆47 · Updated last year
- Kubernetes Scheduler for Deep Learning ☆262 · Updated 3 years ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆120 · Updated last year
- An NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆87 · Updated last month
- Kubernetes Operator for AI and Bigdata Elastic Training ☆90 · Updated last year
- Fast and memory-efficient exact attention ☆110 · Updated last week
- A low-latency & high-throughput serving engine for LLMs ☆470 · Updated 3 weeks ago
- Toolchain built around Megatron-LM for distributed training ☆84 · Updated last month
- RDMA and SHARP plugins for the NCCL library ☆221 · Updated 2 weeks ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆253 · Updated last week
- Fault tolerance for DL frameworks ☆70 · Updated 2 years ago