Azure / MS-AMP-Examples
Examples for MS-AMP package.
☆29 · Updated last year
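For context, MS-AMP is Microsoft's FP8 mixed-precision training package, and these examples revolve around the FP8 E4M3/E5M2 formats. As a rough illustration of how coarse FP8 E4M3 is (3 mantissa bits, maximum finite value 448), here is a minimal pure-Python rounding sketch; the helper name and the flush-to-zero handling of subnormals are simplifications of my own, not MS-AMP code:

```python
import math

# Simulated round-to-nearest cast into the FP8 E4M3 format
# (4 exponent bits, 3 mantissa bits, max finite value 448).
# Simplification: subnormals are flushed to zero, which real
# E4M3 does not do.
E4M3_MAX = 448.0
E4M3_MIN_NORMAL = 2.0 ** -6

def quantize_e4m3(x: float) -> float:
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    if mag > E4M3_MAX:           # saturate instead of overflowing
        return sign * E4M3_MAX
    if mag < E4M3_MIN_NORMAL:    # flush subnormals (simplification)
        return 0.0
    m, e = math.frexp(mag)       # mag = m * 2**e, with m in [0.5, 1)
    # 3 explicit mantissa bits + the implicit leading 1 give 4
    # significant bits, i.e. 16 representable steps per binade.
    m = round(m * 16) / 16
    return sign * math.ldexp(m, e)

print(quantize_e4m3(0.3))     # 0.3125 -- coarse 3-bit mantissa
print(quantize_e4m3(1000.0))  # 448.0  -- saturated to the E4M3 max
```

The narrow range and large rounding steps shown here are why FP8 training frameworks like MS-AMP pair the low-precision format with per-tensor scaling factors and higher-precision master weights.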
Alternatives and similar repositories for MS-AMP-Examples
Users interested in MS-AMP-Examples are comparing it to the libraries listed below.
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆210 · Updated 9 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆93 · Updated last week
- Best practices for testing advanced Mixtral, DeepSeek, and Qwen series MoE models using Megatron Core MoE. ☆17 · Updated this week
- ☆74 · Updated 4 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆125 · Updated 5 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆130 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆217 · Updated 6 months ago
- ☆105 · Updated 9 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆76 · Updated 9 months ago
- Extensible collectives library in Triton ☆87 · Updated 2 months ago
- ☆84 · Updated 3 years ago
- Vocabulary Parallelism ☆19 · Updated 2 months ago
- ☆119 · Updated last year
- ☆80 · Updated 7 months ago
- Sequence-level 1F1B schedule for LLMs. ☆17 · Updated last year
- Pipeline Parallelism Emulation and Visualization ☆40 · Updated 2 weeks ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated last year
- ☆157 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆96 · Updated 2 months ago
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · Updated 6 months ago
- ☆252 · Updated last year
- ☆27 · Updated 3 years ago
- Zero Bubble Pipeline Parallelism ☆396 · Updated last month
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆205 · Updated 2 weeks ago
- ☆93 · Updated last week
- ☆96 · Updated 8 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆55 · Updated 10 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆273 · Updated last year
- ☆85 · Updated 2 months ago