Toolchain built around Megatron-LM for distributed training
☆90 · Updated Mar 5, 2026
Alternatives and similar repositories for MegatronApp
Users interested in MegatronApp are comparing it to the libraries listed below.
- Paper reading and discussion notes, covering AI frameworks, distributed systems, cluster management, etc. (☆57, updated Mar 4, 2026)
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. (☆95, updated Jan 16, 2026)
- Tiny-Megatron, a minimalistic re-implementation of the Megatron library (☆23, updated Sep 1, 2025)
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs (☆25, updated Sep 23, 2025)
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit (☆91, updated Jan 26, 2026)
- [EuroSys'25] Mist: Efficient Distributed Training of Large Language Models via Memory-Parallelism Co-Optimization (☆21, updated Feb 5, 2026)
- No description (☆38, updated Aug 7, 2025)
- No description (☆50, updated Sep 26, 2025)
- Allow torch tensor memory to be released and resumed later (☆225, updated Mar 10, 2026)
- Pipeline Parallelism Emulation and Visualization (☆81, updated Jan 8, 2026)
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training (☆676, updated this week)
- No description (☆25, updated Mar 9, 2026)
- Tutorials for NVIDIA CUPTI samples (☆59, updated Nov 3, 2025)
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and caching (☆59, updated Oct 27, 2025)
- No description (☆11, updated Apr 5, 2021)
- Official implementation of TBA for async LLM post-training. (☆29, updated Nov 5, 2025)
- No description (☆44, updated Sep 8, 2025)
- To pioneer training long-context multi-modal transformer models (☆71, updated Aug 8, 2025)
- Compiler-R1: Towards Agentic Compiler Auto-tuning with Reinforcement Learning (☆28, updated Jul 14, 2025)
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs (☆1,000, updated Mar 3, 2026)
- CS169.1x Software as a Service course offered by UC Berkeley at edx.org (☆14, updated Oct 28, 2014)
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … (☆271, updated this week)
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. (☆176, updated this week)
- Codes for MO's Trading (☆15, updated Mar 20, 2022)
- Artifact from "Hardware Compute Partitioning on NVIDIA GPUs". THIS IS A FORK OF BAKITAS REPO. I AM NOT ONE OF THE AUTHORS OF THE PAPER. (☆56, updated Nov 24, 2025)
- Based on research papers which discuss fuzzy concepts (☆12, updated Jul 19, 2019)
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning (☆201, updated Mar 13, 2026)
- LLM training technologies developed by Kwai (☆71, updated Jan 21, 2026)
- Accelerate LLM preference tuning via prefix sharing with a single line of code (☆51, updated Jul 4, 2025)
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo (☆1,725, updated Mar 14, 2026)
- [INACTIVE] A real-time, collaborative, HTML5 drawing widget powered by KineticJS / FabricJS and inspired by Literally Canvas. (☆10, updated Feb 9, 2014)
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability (☆509, updated this week)
- Spectral Sphere Optimizer (☆111, updated Jan 14, 2026)
- A simple API to use CUPTI (☆10, updated Aug 19, 2025)
- GPT-jax based on the official Hugging Face library (☆13, updated Jun 22, 2021)
- Implementation of the SOTA Transformer architecture from PaLM - Scaling Language Modeling with Pathways in JAX/Flax (☆14, updated Jun 22, 2022)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (☆21, updated this week)
- No description (☆52, updated Apr 30, 2025)
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training (☆222, updated Aug 19, 2024)