Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core.
☆167 · Updated Jan 22, 2026
Alternatives and similar repositories for Megatron-MoE-ModelZoo
Users interested in Megatron-MoE-ModelZoo are comparing it to the libraries listed below.
- Pipeline Parallelism Emulation and Visualization · ☆79 · Updated Jan 8, 2026
- ☆42 · Updated Sep 8, 2025
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆51 · Updated Jul 4, 2025
- [WIP] Better (FP8) attention for Hopper · ☆32 · Updated Feb 24, 2025
- Cute layout visualization · ☆30 · Updated Jan 18, 2026
- ☆159 · Updated this week
- Ongoing research training transformer models at scale · ☆18 · Updated this week
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning · ☆197 · Updated this week
- Megatron's multi-modal data loader · ☆322 · Updated Feb 26, 2026
- ☆32 · Updated Jul 2, 2025
- PyTorch bindings for CUTLASS grouped GEMM · ☆185 · Updated Feb 19, 2026
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud · ☆1,534 · Updated Dec 15, 2025
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang · ☆44 · Updated Nov 19, 2025
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability · ☆481 · Updated this week
- ☆52 · Updated May 19, 2025
- [Archived] For the latest updates and community contributions, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… · ☆13 · Updated Jan 16, 2026
- Transformers components, but in Triton · ☆34 · Updated May 9, 2025
- ☆87 · Updated Feb 27, 2026
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments · ☆93 · Updated Jan 16, 2026
- Utility scripts for PyTorch (e.g. make Perfetto show some disappearing kernels, a memory profiler that understands more low-level allocatio… · ☆90 · Updated Sep 11, 2025
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … · ☆264 · Updated Feb 27, 2026
- Allow torch tensor memory to be released and resumed later · ☆220 · Updated Feb 9, 2026
- Perplexity GPU Kernels · ☆567 · Updated Nov 7, 2025
- ☆65 · Updated Apr 26, 2025
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer · ☆165 · Updated Feb 11, 2026
- ☆347 · Updated Jan 28, 2026
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating-point (FP8 and FP4) precision on H… · ☆3,176 · Updated Feb 28, 2026
- A library to analyze PyTorch traces · ☆472 · Updated Feb 4, 2026
- Toolchain built around Megatron-LM for distributed training · ☆89 · Updated Dec 7, 2025
- Ring attention implementation with flash attention · ☆987 · Updated Sep 10, 2025
- LLM training technologies developed by Kwai · ☆70 · Updated Jan 21, 2026
- Fast and memory-efficient exact attention · ☆16 · Updated this week
- PyTorch implementation of our ICML 2024 paper, "CaM: Cache Merging for Memory-efficient LLMs Inference" · ☆47 · Updated Jun 19, 2024
- Distributed Compiler based on Triton for Parallel Systems · ☆1,371 · Updated Feb 13, 2026
- Estimate MFU for DeepSeekV3 · ☆26 · Updated Jan 5, 2025
- A fast communication-overlapping library for tensor/expert parallelism on GPUs · ☆1,264 · Updated Aug 28, 2025
- ☆13 · Updated May 8, 2023
- ☆24 · Updated May 9, 2025
- ☆15 · Updated Feb 24, 2026