AlibabaPAI / torchacc
PyTorch distributed training acceleration framework
☆55 · Updated 5 months ago
Alternatives and similar repositories for torchacc
Users interested in torchacc are comparing it to the libraries listed below.
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- LLM training technologies developed by kwai. ☆70 · Updated 3 weeks ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- ☆105 · Updated last year
- ☆155 · Updated 11 months ago
- ☆342 · Updated 2 weeks ago
- A lightweight design for computation-communication overlap. ☆219 · Updated 3 weeks ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆298 · Updated 3 weeks ago
- ☆152 · Updated last year
- ☆159 · Updated last year
- ☆96 · Updated 10 months ago
- Pipeline Parallelism Emulation and Visualization. ☆77 · Updated last month
- Allows torch tensor memory to be released and resumed later. ☆216 · Updated 3 weeks ago
- Dynamic Memory Management for Serving LLMs without PagedAttention. ☆458 · Updated 8 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer. ☆161 · Updated 4 months ago
- ☆130 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training. ☆124 · Updated 4 months ago
- High-performance Transformer implementation in C++. ☆151 · Updated last year
- Fast and easy distributed model training examples. ☆12 · Updated last year
- ☆114 · Updated 8 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems. ☆82 · Updated last year
- ☆175 · Updated 9 months ago
- An NCCL extension library designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆90 · Updated last month
- ☆141 · Updated last year
- A baseline repository of Auto-Parallelism in Training Neural Networks. ☆147 · Updated 3 years ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆120 · Updated last year
- Fast and memory-efficient exact attention. ☆114 · Updated this week
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- High-Performance LLM Inference Operator Library. ☆695 · Updated last week
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆92 · Updated 3 weeks ago