AlibabaPAI / torchacc
PyTorch distributed training acceleration framework
☆51 · Updated 5 months ago
Alternatives and similar repositories for torchacc
Users interested in torchacc are comparing it to the libraries listed below.
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆104 · Updated 2 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆61 · Updated last year
- A lightweight design for computation-communication overlap. ☆155 · Updated last month
- ☆96 · Updated 11 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆94 · Updated 2 years ago
- ☆145 · Updated 5 months ago
- ☆92 · Updated 4 months ago
- ☆128 · Updated 7 months ago
- ☆209 · Updated last week
- ☆81 · Updated last week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆256 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training ☆114 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆134 · Updated 3 weeks ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆79 · Updated 8 months ago
- ☆149 · Updated 6 months ago
- Pipeline Parallelism Emulation and Visualization ☆54 · Updated last month
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (a minimal roofline sketch follows this list). ☆110 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆405 · Updated 2 months ago
- ☆102 · Updated 7 months ago
- A low-latency & high-throughput serving engine for LLMs ☆400 · Updated 2 months ago
- Perplexity GPU Kernels ☆418 · Updated 3 weeks ago
- ☆91 · Updated 2 months ago
- Fast and easy distributed model training examples. ☆13 · Updated 8 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆185 · Updated this week
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆144 · Updated 3 years ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆39 · Updated 5 months ago
- High performance Transformer implementation in C++. ☆129 · Updated 6 months ago
- Zero Bubble Pipeline Parallelism ☆415 · Updated 3 months ago
- ☆139 · Updated last year
- ☆110 · Updated 8 months ago
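For context on the roofline comparison entry above: the roofline model bounds attainable throughput by the minimum of peak compute and memory bandwidth times arithmetic intensity. A minimal sketch with hypothetical accelerator numbers (not taken from the listed repository):

```python
# Minimal roofline estimate: attainable FLOP/s is bounded by compute and bandwidth.
# The hardware numbers below are hypothetical (roughly A100-class), not measured data.

def roofline_tflops(peak_tflops: float, bandwidth_tb_s: float, intensity_flop_per_byte: float) -> float:
    """Attainable TFLOP/s for a kernel with the given arithmetic intensity (FLOPs per byte)."""
    return min(peak_tflops, bandwidth_tb_s * intensity_flop_per_byte)

peak, bw = 312.0, 2.0  # assumed peak FP16 TFLOP/s and HBM bandwidth in TB/s

# Batch-1 LLM decode reads every weight once per token (~1 FLOP/byte): memory-bound.
print(roofline_tflops(peak, bw, 1.0))    # -> 2.0 TFLOP/s, far below peak
# Large-batch prefill reuses each weight across many tokens: compute-bound past the ridge point.
print(roofline_tflops(peak, bw, 300.0))  # -> 312.0 TFLOP/s, capped at peak
```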