yanring / Megatron-MoE-ModelZoo
Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core.
☆161 · Updated Jan 22, 2026
Alternatives and similar repositories for Megatron-MoE-ModelZoo
Users interested in Megatron-MoE-ModelZoo are comparing it to the libraries listed below.
- Pipeline Parallelism Emulation and Visualization · ☆77 · Updated Jan 8, 2026
- ☆42 · Updated Sep 8, 2025
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆51 · Updated Jul 4, 2025
- [WIP] Better (FP8) attention for Hopper · ☆32 · Updated Feb 24, 2025
- Cute layout visualization · ☆30 · Updated Jan 18, 2026
- ☆151 · Updated this week
- Ongoing research training transformer models at scale · ☆18 · Updated Feb 5, 2026
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning · ☆193 · Updated this week
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang · ☆43 · Updated Nov 19, 2025
- ☆32 · Updated Jul 2, 2025
- PyTorch bindings for CUTLASS grouped GEMM · ☆184 · Updated Dec 16, 2025
- ☆52 · Updated May 19, 2025
- [Archived] For the latest updates and community contributions, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… · ☆13 · Updated Jan 16, 2026
- Megatron's multi-modal data loader · ☆316 · Updated Feb 6, 2026
- Allow torch tensor memory to be released and resumed later · ☆216 · Updated Jan 13, 2026
- ☆84 · Updated Feb 6, 2026
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … · ☆262 · Updated Feb 7, 2026
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. · ☆1,527 · Updated Dec 15, 2025
- Perplexity GPU Kernels · ☆560 · Updated Nov 7, 2025
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability · ☆427 · Updated this week
- ☆65 · Updated Apr 26, 2025
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer · ☆163 · Updated this week
- ☆342 · Updated Jan 28, 2026
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… · ☆3,152 · Updated Feb 7, 2026
- Toolchain built around Megatron-LM for distributed training · ☆86 · Updated Dec 7, 2025
- Ring attention implementation with flash attention · ☆980 · Updated Sep 10, 2025
- LLM training technologies developed by kwai · ☆70 · Updated Jan 21, 2026
- ☆11 · Updated Dec 12, 2021
- PyTorch implementation of our ICML 2024 paper, CaM: Cache Merging for Memory-efficient LLMs Inference · ☆49 · Updated Jun 19, 2024
- Fast and memory-efficient exact attention · ☆15 · Updated Feb 3, 2026
- Distributed Compiler based on Triton for Parallel Systems · ☆1,350 · Updated this week
- Estimate MFU for DeepSeekV3 · ☆26 · Updated Jan 5, 2025
- A library to analyze PyTorch traces · ☆464 · Updated Feb 4, 2026
- ☆15 · Updated Oct 30, 2025
- ☆13 · Updated May 8, 2023
- ☆24 · Updated May 9, 2025
- Physics of Language Models: Part 4.2, Canon Layers at Scale where Synthetic Pretraining Resonates in Reality · ☆317 · Updated Jan 5, 2026
- Tutorials for NVIDIA CUPTI samples · ☆52 · Updated Nov 3, 2025
- PyTorch bindings for CUTLASS grouped GEMM · ☆143 · Updated May 29, 2025