Training library for Megatron-based models with bidirectional Hugging Face conversion capability
☆481 · Mar 5, 2026 · Updated this week
Alternatives and similar repositories for Megatron-Bridge
Users interested in Megatron-Bridge are comparing it to the libraries listed below.
- PyTorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support ☆340 · Updated this week
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆197 · Updated this week
- Megatron's multi-modal data loader ☆322 · Feb 26, 2026 · Updated last week
- Scalable toolkit for efficient model reinforcement ☆1,372 · Updated this week
- Best practices and tuning guide for Megatron on veRL ☆131 · Sep 26, 2025 · Updated 5 months ago
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆167 · Jan 22, 2026 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. ☆185 · Feb 19, 2026 · Updated 2 weeks ago
- slime is an LLM post-training framework for RL scaling. ☆4,536 · Updated this week
- A flexible and efficient training framework for large-scale alignment tasks ☆451 · Oct 23, 2025 · Updated 4 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Jul 4, 2025 · Updated 8 months ago
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,676 · Updated this week
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆293 · Nov 7, 2025 · Updated 4 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,656 · Updated this week
- Async pipelined version of Verl ☆124 · Apr 8, 2025 · Updated 10 months ago
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. ☆1,534 · Dec 15, 2025 · Updated 2 months ago
- Ongoing research training transformer models at scale ☆15,461 · Updated this week
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆659 · Updated this week
- Toolchain built around Megatron-LM for distributed training ☆89 · Dec 7, 2025 · Updated 3 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆262 · Updated this week
- DeeperGEMM: crazy optimized version ☆74 · May 5, 2025 · Updated 10 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,176 · Feb 28, 2026 · Updated last week
- Distributed Compiler based on Triton for Parallel Systems ☆1,371 · Feb 13, 2026 · Updated 3 weeks ago
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆2,919 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,230 · Aug 14, 2025 · Updated 6 months ago
- A PyTorch native platform for training generative AI models ☆5,098 · Feb 28, 2026 · Updated last week
- A Sober Look at Language Model Reasoning ☆93 · Nov 18, 2025 · Updated 3 months ago
- ☆1,104 · Jan 10, 2026 · Updated last month
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆533 · Feb 26, 2026 · Updated last week
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆93 · Jan 16, 2026 · Updated last month
- ☆38 · Aug 7, 2025 · Updated 6 months ago
- Pipeline Parallelism Emulation and Visualization ☆79 · Jan 8, 2026 · Updated last month
- ☆48 · Jan 20, 2026 · Updated last month
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆644 · Jan 15, 2026 · Updated last month
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,519 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆969 · Feb 5, 2026 · Updated last month
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆3,586 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,579 · Feb 19, 2026 · Updated 2 weeks ago
- ☆14 · Apr 14, 2025 · Updated 10 months ago
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,084 · Updated this week