AlibabaPAI / FlashModels
Fast and easy distributed model training examples.
☆11 · Updated 3 months ago
Alternatives and similar repositories for FlashModels:
Users interested in FlashModels are comparing it to the libraries listed below.
- PyTorch distributed training acceleration framework ☆43 · Updated 2 weeks ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆91 · Updated last year
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆142 · Updated 2 years ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆52 · Updated 7 months ago
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆139 · Updated this week
- ☆142 · Updated last month
- A fast communication-overlapping library for tensor parallelism on GPUs. ☆319 · Updated this week
- ☆139 · Updated 10 months ago
- ☆75 · Updated 2 years ago
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated 10 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆296 · Updated last week
- nnScaler: Compiling DNN models for Parallel Training ☆97 · Updated 2 weeks ago
- ☆101 · Updated 2 months ago
- Development repository for the Triton-Linalg conversion ☆176 · Updated 3 weeks ago
- Artifact of OSDI '24 paper, "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆60 · Updated 8 months ago
- Yinghan's Code Sample ☆311 · Updated 2 years ago
- A home for the final text of all TVM RFCs. ☆102 · Updated 5 months ago
- ☆127 · Updated 2 months ago
- Examples of CUDA implementations by Cutlass CuTe ☆141 · Updated last month
- ☆68 · Updated 3 months ago
- An easy-to-understand TensorOp Matmul Tutorial ☆322 · Updated 5 months ago
- ☆130 · Updated 2 months ago
- Microsoft Collective Communication Library ☆339 · Updated last year
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆302 · Updated this week
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆117 · Updated 2 years ago
- ☆81 · Updated 5 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Model ☆74 · Updated 4 years ago
- Code reading for TVM ☆74 · Updated 3 years ago
- A low-latency & high-throughput serving engine for LLMs ☆316 · Updated last month
- An Optimizing Compiler for Recommendation Model Inference ☆22 · Updated last year