AlibabaPAI / FlashModels
Fast and easy distributed model training examples.
☆12 · Updated last year
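For context, FlashModels collects distributed model training examples. As a rough illustration of the kind of workflow such examples cover, below is a minimal sketch of a distributed data-parallel training loop in plain PyTorch. This is generic `torch.distributed` usage under the assumption of a `torchrun` launch, not FlashModels' own API; the model, loss, and filename are placeholders.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK / WORLD_SIZE / LOCAL_RANK and the rendezvous env vars.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; each rank holds a full replica under DDP.
    model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 1024, device="cuda")
        loss = model(x).square().mean()   # placeholder loss
        opt.zero_grad()
        loss.backward()                   # DDP all-reduces gradients here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, e.g., `torchrun --nproc_per_node=2 train_ddp.py` (the filename is hypothetical).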
Alternatives and similar repositories for FlashModels
Users interested in FlashModels are comparing it to the libraries listed below.
- PyTorch distributed training acceleration framework ☆54 · Updated 4 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- ☆153 · Updated last year
- ☆165 · Updated 7 months ago
- LLM training technologies developed by kwai ☆67 · Updated last month
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- An easy-to-understand TensorOp Matmul Tutorial ☆400 · Updated 2 months ago
- ☆152 · Updated 11 months ago
- Zero Bubble Pipeline Parallelism ☆443 · Updated 7 months ago
- ☆112 · Updated 7 months ago
- ☆158 · Updated last month
- ☆254 · Updated last year
- Examples of CUDA implementations with CUTLASS CuTe ☆263 · Updated 6 months ago
- ☆141 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆453 · Updated 7 months ago
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆186 · Updated 2 months ago
- Allow torch tensor memory to be released and resumed later ☆195 · Updated last month
- A lightweight design for computation-communication overlap (see the sketch after this list). ☆206 · Updated last week
- ☆104 · Updated last year
- Shared Middle-Layer for Triton Compilation ☆321 · Updated 3 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆177 · Updated 2 weeks ago
- ☆336 · Updated last week
- Perplexity GPU Kernels ☆547 · Updated last month
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- ☆77 · Updated last year
- ☆84 · Updated 3 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆121 · Updated 3 months ago
- ☆32 · Updated 2 years ago
- Pipeline Parallelism Emulation and Visualization ☆74 · Updated 6 months ago
- An Optimizing Compiler for Recommendation Model Inference ☆26 · Updated 6 months ago
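The computation-communication overlap entry above refers to a common distributed-training pattern: start a collective asynchronously, do independent work while it is in flight, and synchronize only where the result is consumed. Below is a generic sketch of that pattern using PyTorch's asynchronous collectives; the function and tensor names are hypothetical, it assumes an already-initialized process group, and it says nothing about that repository's actual design.

```python
import torch
import torch.distributed as dist

def overlapped_step(grad_bucket: torch.Tensor,
                    x: torch.Tensor,
                    w: torch.Tensor):
    # Launch the collective without blocking; NCCL proceeds on its own stream.
    work = dist.all_reduce(grad_bucket, async_op=True)
    # Independent compute runs concurrently with the in-flight communication.
    y = torch.matmul(x, w)
    # Block only at the point the reduced tensor is actually consumed.
    work.wait()
    return y, grad_bucket
```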