microsoft / nnscaler
nnScaler: Compiling DNN models for Parallel Training
☆118 · Updated last month
Alternatives and similar repositories for nnscaler
Users interested in nnscaler are comparing it to the libraries listed below.
- A lightweight design for computation-communication overlap. ☆183 · Updated last month
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆67 · Updated 7 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆222 · Updated 2 years ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated 11 months ago
- Sequence-level 1F1B schedule for LLMs (a minimal 1F1B schedule sketch appears after this list). ☆32 · Updated 2 months ago
- Allow torch tensor memory to be released and resumed later (see the storage release/resume sketch after this list). ☆164 · Updated last week
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Parallelism ☆66 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆142 · Updated last month
- High performance Transformer implementation in C++. ☆140 · Updated 9 months ago
- Zero Bubble Pipeline Parallelism ☆433 · Updated 6 months ago
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of papers. ☆278 · Updated 8 months ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆31 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆434 · Updated 5 months ago
- PyTorch bindings for CUTLASS grouped GEMM (see the grouped-GEMM reference sketch after this list). ☆126 · Updated 5 months ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆20 · Updated last year
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆257 · Updated 3 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆164 · Updated last month
- An experimental parallel training platform ☆56 · Updated last year
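
One item above offers a sequence-level 1F1B schedule, and another targets zero-bubble pipelines; both build on the classic one-forward-one-backward (1F1B) schedule. As a minimal sketch of that baseline (a plain Python simulation with arbitrary stage and microbatch counts, not code from any repository listed here):

```python
# Minimal, illustrative simulation of the classic 1F1B pipeline schedule.
# Not taken from any repository above; stage/microbatch counts are arbitrary.

def one_f_one_b(num_stages: int, num_microbatches: int, stage: int):
    """Return the ('F', i) / ('B', i) event order executed by one stage."""
    # Warm-up: earlier stages run extra forwards before their first backward.
    warmup = min(num_stages - stage - 1, num_microbatches)
    schedule = []
    fwd = bwd = 0
    for _ in range(warmup):           # warm-up phase: forwards only
        schedule.append(("F", fwd))
        fwd += 1
    while fwd < num_microbatches:     # steady state: alternate 1F and 1B
        schedule.append(("F", fwd))
        fwd += 1
        schedule.append(("B", bwd))
        bwd += 1
    while bwd < num_microbatches:     # cool-down: drain remaining backwards
        schedule.append(("B", bwd))
        bwd += 1
    return schedule

if __name__ == "__main__":
    for s in range(4):
        print(f"stage {s}:", one_f_one_b(num_stages=4, num_microbatches=8, stage=s))
```

The alternating steady state is what bounds activation memory to roughly one microbatch per in-flight stage, which is the property the sequence-level and zero-bubble variants above refine.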
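Two items above are PyTorch bindings for CUTLASS grouped GEMM. As a hedged illustration of the operation they accelerate (the reference semantics, not those bindings' actual API), a grouped GEMM is one independent matmul per group fused into a single kernel launch; the loop below spells that out in plain PyTorch:

```python
# Reference semantics of a grouped GEMM as a plain PyTorch loop.
# CUTLASS-backed bindings fuse these per-group matmuls into one kernel launch;
# this loop only shows the expected output, it is not their API.
import torch

def grouped_gemm_reference(xs: list[torch.Tensor], ws: list[torch.Tensor]) -> list[torch.Tensor]:
    """Multiply each (m_i x k) input by its own (k x n) weight."""
    assert len(xs) == len(ws)
    return [x @ w for x, w in zip(xs, ws)]

# Example: three "expert" groups with different token counts, as in MoE layers.
k, n = 64, 128
xs = [torch.randn(m, k) for m in (5, 17, 3)]
ws = [torch.randn(k, n) for _ in xs]
outs = grouped_gemm_reference(xs, ws)
print([tuple(o.shape) for o in outs])  # [(5, 128), (17, 128), (3, 128)]
```

The variable group sizes are why MoE-serving libraries like the one listed above lean on grouped GEMM rather than a single batched matmul, which would require equal shapes per group.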
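The "released and resumed later" item describes a pattern that stock PyTorch can approximate. A minimal sketch, assuming only PyTorch's public `untyped_storage()` / `resize_` API and not that repository's implementation:

```python
# Illustrative only: release a tensor's backing memory and restore it later
# via PyTorch storage resizing. This is a generic trick, not the listed
# repository's implementation.
import torch

t = torch.arange(8, dtype=torch.float32)
nbytes = t.untyped_storage().nbytes()

t.untyped_storage().resize_(0)       # release: backing storage now holds 0 bytes
# ... the memory is available to other allocations here; do NOT read `t` ...

t.untyped_storage().resize_(nbytes)  # resume: re-allocate the same capacity
t.copy_(torch.arange(8, dtype=torch.float32))  # contents are gone; refill them
print(t)
```

Note that resizing to zero invalidates the tensor's contents, so anything released this way must be recomputed or reloaded on resume.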