OpenSQZ / MegatronApp
Toolchain built around Megatron-LM for distributed training
☆84 · Updated 2 months ago
Alternatives and similar repositories for MegatronApp
Users interested in MegatronApp are comparing it to the libraries listed below.
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆268 · Updated 2 months ago
- Allow torch tensor memory to be released and resumed later ☆216 · Updated 3 weeks ago
- An NCCL extension library designed to efficiently offload GPU memory allocated by the NCCL communication library. ☆90 · Updated last month
- A simple calculation for LLM MFU. ☆66 · Updated 4 months ago
- ☆96 · Updated 10 months ago
- ☆342 · Updated last week
- Utility scripts for PyTorch (e.g. Make Perfetto show some disappearing kernels, Memory profiler that understands more low-level allocatio… ☆83 · Updated 4 months ago
- Pipeline Parallelism Emulation and Visualization ☆77 · Updated 3 weeks ago
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆191 · Updated last week
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆263 · Updated this week
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆92 · Updated 3 weeks ago
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆87 · Updated last week
- ☆73 · Updated 4 months ago
- [Archived] For the latest updates and community contribution, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆13 · Updated 3 weeks ago
- torchcomms: a modern PyTorch communications API ☆327 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆184 · Updated last month
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- ☆155 · Updated 11 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated last year
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆92 · Updated last week
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability ☆419 · Updated this week
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆161 · Updated 2 weeks ago
- High-performance distributed data shuffling (all-to-all) library for MoE training and inference ☆111 · Updated last month
- A high-performance RL training-inference weight synchronization framework, designed to enable second-level parameter updates from trainin… ☆131 · Updated last month
- Accelerating MoE with IO and Tile-aware Optimizations ☆569 · Updated 2 weeks ago
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- Autonomous GPU Kernel Generation via Deep Agents ☆228 · Updated this week
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆123 · Updated last month
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆384 · Updated last week