microsoft / nnscaler
nnScaler: Compiling DNN models for Parallel Training
☆118 · Updated last week
Alternatives and similar repositories for nnscaler
Users interested in nnscaler are comparing it to the libraries listed below.
- A lightweight design for computation-communication overlap. ☆177 · Updated 2 weeks ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆66 · Updated 6 months ago
- ☆298 · Updated last week
- ☆123 · Updated 10 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆146 · Updated 3 years ago
- ☆75 · Updated 4 years ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated 10 months ago
- High-performance Transformer implementation in C++. ☆134 · Updated 8 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆130 · Updated 2 weeks ago
- ☆81 · Updated 4 months ago
- ☆72 · Updated last year
- ☆151 · Updated last year
- ☆83 · Updated 2 years ago
- Allow torch tensor memory to be released and resumed later ☆142 · Updated last week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆221 · Updated 2 years ago
- ☆88 · Updated 3 years ago
- A resilient distributed training framework ☆95 · Updated last year
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆273 · Updated 6 months ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆49 · Updated this week
- Sequence-level 1F1B schedule for LLMs. ☆32 · Updated last month
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆421 · Updated 4 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆65 · Updated last year
- ☆121 · Updated 9 months ago
- PyTorch library for cost-effective, fast, and easy serving of MoE models. ☆241 · Updated 2 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆151 · Updated last month
- ☆98 · Updated last year
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆20 · Updated last year
- An experimental parallel training platform ☆54 · Updated last year
- Zero Bubble Pipeline Parallelism ☆427 · Updated 4 months ago
- Stateful LLM Serving ☆85 · Updated 6 months ago