AlibabaPAI / torchacc
PyTorch distributed training acceleration framework
☆55 · Updated 5 months ago
Alternatives and similar repositories for torchacc
Users interested in torchacc are comparing it to the libraries listed below.
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- LLM training technologies developed by kwai ☆70 · Updated 3 weeks ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- A lightweight design for computation-communication overlap (see the overlap sketch after this list). ☆219 · Updated 3 weeks ago
- ☆342 · Updated 2 weeks ago
- ☆105 · Updated last year
- Allows torch tensor memory to be released and resumed later (see the release/resume sketch after this list). ☆216 · Updated 3 weeks ago
- ☆155 · Updated 11 months ago
- FlagCX is a scalable and adaptive cross-chip communication library. ☆172 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- High-performance Transformer implementation in C++. ☆150 · Updated last year
- ☆159 · Updated last year
- ☆152 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆161 · Updated 4 months ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆298 · Updated 3 weeks ago
- ☆96 · Updated 10 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆458 · Updated 8 months ago
- Fast and memory-efficient exact attention ☆114 · Updated this week
- ☆113 · Updated 8 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆92 · Updated 3 weeks ago
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆120 · Updated last year
- ☆130 · Updated last year
- Pipeline Parallelism Emulation and Visualization ☆77 · Updated last month
- ☆131 · Updated last year
- An NCCL extension library, designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆90 · Updated last month
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆70 · Updated 10 months ago
- A low-latency & high-throughput serving engine for LLMs ☆470 · Updated last month
- Fast and easy distributed model training examples. ☆12 · Updated last year
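
Several of the projects above center on overlapping collective communication with computation. The sketch below shows the generic PyTorch pattern only; it is a minimal illustration under my own assumptions, not any listed repository's actual design. Asynchronous all-reduces are launched per gradient bucket while independent matmuls keep the device busy, and the handles are waited on only afterward.

```python
import os
import torch
import torch.distributed as dist

def overlapped_allreduce(buckets, inputs):
    """Launch async all-reduces, then overlap them with independent compute."""
    # Kick off communication without blocking the host thread.
    handles = [dist.all_reduce(b, async_op=True) for b in buckets]
    # Compute proceeds while the collectives are in flight.
    outputs = [x @ x.T for x in inputs]
    for h in handles:
        h.wait()  # synchronize only before the reduced buckets are consumed
    return outputs

if __name__ == "__main__":
    # Single-process gloo group so the sketch runs anywhere; real training
    # would use the NCCL backend across many ranks.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)
    buckets = [torch.randn(1024) for _ in range(4)]
    inputs = [torch.randn(256, 256) for _ in range(4)]
    overlapped_allreduce(buckets, inputs)
    dist.destroy_process_group()
```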
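The "released and resumed" entry refers to freeing a tensor's device memory while it is idle and restoring it before reuse. Below is a minimal sketch of that behavior using plain CPU offload; the `PausableTensor` class and its methods are hypothetical names of my own, and the sketch demonstrates only the effect, not the library's actual mechanism (which, among other things, would not preserve the device address here).

```python
import torch

class PausableTensor:
    """Hypothetical wrapper: drop GPU memory while idle, restore on demand."""
    def __init__(self, t: torch.Tensor):
        self.t = t

    def release(self):
        # Copy to host and drop the CUDA allocation; empty_cache() hands the
        # freed blocks back to the driver so other consumers can use them.
        self.t = self.t.cpu()
        torch.cuda.empty_cache()

    def resume(self):
        self.t = self.t.cuda()

if torch.cuda.is_available():
    p = PausableTensor(torch.randn(1024, 1024, device="cuda"))
    p.release()  # device memory is now free for other work
    p.resume()   # tensor is back on the GPU (at a new address)
```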
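The roofline-model entry compares hardware by bounding a kernel's attainable throughput with min(peak compute, memory bandwidth × arithmetic intensity). A small worked example follows; the hardware numbers are hypothetical A100-like figures chosen for illustration, not values from that repository.

```python
def roofline_flops(peak_flops: float, mem_bw_bytes: float,
                   arithmetic_intensity: float) -> float:
    """Attainable FLOP/s for a kernel with the given FLOP/byte ratio."""
    return min(peak_flops, mem_bw_bytes * arithmetic_intensity)

PEAK = 312e12  # ~312 TFLOP/s FP16 peak (assumed)
BW = 2.0e12    # ~2 TB/s HBM bandwidth (assumed)

# LLM decode reads the weights once per generated token, so its arithmetic
# intensity is low (~1 FLOP/byte here) and the kernel is memory-bound.
print(f"decode-like:  {roofline_flops(PEAK, BW, 1.0) / 1e12:.1f} TFLOP/s")
# A large prefill matmul has high intensity and hits the compute roof instead.
print(f"prefill-like: {roofline_flops(PEAK, BW, 300.0) / 1e12:.1f} TFLOP/s")
```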