alibaba / TePDist
TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models.
☆99 · Updated 2 years ago
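TePDist works at the XLA HLO level: distribution strategies are decided on the compiler IR rather than written into the model code. As a rough illustration of what that means, the sketch below uses JAX's GSPMD path as a stand-in (this is not TePDist's own API; the function `layer`, the mesh axis name `data`, and the array shapes are all illustrative assumptions): shardings are attached to arrays, and the XLA partitioner propagates them through the lowered HLO and inserts the needed collectives automatically.

```python
# A minimal sketch of HLO-level sharding, using JAX's GSPMD path as a
# stand-in (NOT TePDist's API; names and shapes are illustrative).
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange whatever devices are available into a 1-D logical mesh.
mesh = Mesh(jax.devices(), axis_names=("data",))

@jax.jit
def layer(x, w):
    # Plain single-device code; no communication is written by hand.
    return jnp.dot(x, w)

x = jax.device_put(jnp.ones((8, 128)),
                   NamedSharding(mesh, P("data", None)))  # shard batch dim
w = jax.device_put(jnp.ones((128, 64)),
                   NamedSharding(mesh, P(None, None)))    # replicate weights

# XLA's SPMD partitioner propagates the input shardings through the
# HLO graph and inserts any required collectives automatically.
print(layer(x, w).sharding)
```

The point of operating at this level, as TePDist does, is that partitioning decisions live in the compiler IR, so the same model code can be re-sharded without being rewritten.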
Alternatives and similar repositories for TePDist
Users interested in TePDist are comparing it to the libraries listed below.
- PyTorch distributed training acceleration framework ☆54 · Updated 4 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated 2 years ago
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆186 · Updated 2 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆494 · Updated 9 months ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆159 · Updated last year
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆289 · Updated 4 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated 2 weeks ago
- ☆337 · Updated last week
- A lightweight design for computation-communication overlap. ☆207 · Updated 2 weeks ago
- LLM training technologies developed by kwai ☆69 · Updated this week
- ☆152 · Updated last year
- ☆47 · Updated last year
- Microsoft Collective Communication Library ☆378 · Updated 2 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 5 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆66 · Updated last year
- DeepSeek-V3/R1 inference performance simulator ☆175 · Updated 9 months ago
- ☆104 · Updated last year
- ☆84 · Updated 3 years ago
- A model compilation solution for various hardware ☆458 · Updated 4 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆449 · Updated this week
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆331 · Updated 3 weeks ago
- ☆153 · Updated last year
- ☆130 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆123 · Updated 3 months ago
- FlagCX is a scalable and adaptive cross-chip communication library. ☆138 · Updated last week
- A benchmark suite especially for deep learning operators ☆42 · Updated 2 years ago
- GPU-scheduler-for-deep-learning ☆210 · Updated 5 years ago
- DeepLearning Framework Performance Profiling Toolkit ☆294 · Updated 3 years ago