alibaba / TePDist
TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models.
☆97 · Updated 2 years ago
Alternatives and similar repositories for TePDist
Users interested in TePDist are comparing it to the libraries listed below.
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆175 · Updated last week
- PyTorch distributed training acceleration framework ☆53 · Updated 2 months ago
- GLake: optimizing GPU memory management and IO transmission. ☆487 · Updated 7 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆270 · Updated 2 years ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆267 · Updated 2 months ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆159 · Updated last year
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆114 · Updated 5 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated 11 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆66 · Updated last year
- ☆152 · Updated 10 months ago
- A lightweight design for computation-communication overlap. ☆183 · Updated last month
- ☆129 · Updated 10 months ago
- ☆101 · Updated last year
- ☆32 · Updated 2 years ago
- ☆146 · Updated 10 months ago
- Microsoft Collective Communication Library ☆371 · Updated 2 years ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆76 · Updated 4 years ago
- ☆312 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last month
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆62 · Updated last year
- An Optimizing Compiler for Recommendation Model Inference ☆26 · Updated 5 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆433 · Updated this week
- DeepSeek-V3/R1 inference performance simulator ☆168 · Updated 7 months ago
- ☆83 · Updated 2 years ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆142 · Updated last month
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆434 · Updated 5 months ago
- ☆193 · Updated 2 years ago
- Efficient and easy multi-instance LLM serving ☆506 · Updated 2 months ago
- ☆90 · Updated 7 months ago