AlibabaPAI / torchacc
PyTorch distributed training acceleration framework
☆39 · Updated last week
Alternatives and similar repositories for torchacc:
Users interested in torchacc are comparing it to the libraries listed below.
- Fast and easy distributed model training examples. ☆11 · Updated 2 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆90 · Updated last year
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆51 · Updated 6 months ago
- ☆142 · Updated last month
- ☆81 · Updated 5 months ago
- ☆140 · Updated 9 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 3 months ago
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆138 · Updated last week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆222 · Updated last week
- A fast communication-overlapping library for tensor parallelism on GPUs. ☆295 · Updated 3 months ago
- ☆127 · Updated last month
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 8 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆91 · Updated this week
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated 9 months ago
- ☆43 · Updated last month
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆93 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆94 · Updated last month
- ☆101 · Updated 6 months ago
- ☆43 · Updated this week
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆142 · Updated 2 years ago
- ☆36 · Updated 2 months ago
- ☆75 · Updated 2 years ago
- ☆83 · Updated 3 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆290 · Updated last week
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving ☆17 · Updated this week
- High performance Transformer implementation in C++. ☆102 · Updated last month
- High performance RDMA-based distributed feature collection component for training GNN models on EXTREMELY large graphs ☆50 · Updated 2 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆117 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆44 · Updated this week
- ☆72 · Updated 3 years ago