kwai / Megatron-Kwai
LLM training technologies developed by kwai
☆70 · updated Jan 21, 2026
Alternatives and similar repositories for Megatron-Kwai
Users interested in Megatron-Kwai are comparing it to the libraries listed below.
- Ongoing research training transformer models at scale (☆18, updated Feb 5, 2026)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for long-context transformer model training and inference (☆643, updated Jan 15, 2026; see the all-to-all sketch after this list)
- Zero Bubble Pipeline Parallelism (☆449, updated May 7, 2025; see the backward-split sketch after this list)
- Sequence-level 1F1B schedule for LLMs (☆38, updated Aug 26, 2025)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs (☆1,247, updated Aug 28, 2025)
- Distributed IO-aware Attention algorithm (☆24, updated Sep 24, 2025)
- Ring attention implementation with flash attention (☆980, updated Sep 10, 2025; see the blockwise-softmax sketch after this list)
- Custom recipes for post-collection analysis of NVIDIA Nsight Systems traces (☆16, updated Nov 7, 2025)
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments (☆93, updated Jan 16, 2026)
- Accelerate Video Diffusion Inference via Sketching-Rendering Cooperation (☆19, updated Jun 11, 2025)
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencies (☆418, updated Aug 21, 2025)
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances (☆55, updated Dec 11, 2022)
- Distributed Compiler based on Triton for Parallel Systems (☆1,350, updated Feb 9, 2026)
- Allow torch tensor memory to be released and resumed later (☆217, updated Feb 9, 2026; see the release/resume sketch after this list)
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines (☆19, updated Dec 8, 2023)
- ☆51, updated Apr 30, 2025
- [EuroSys'25] Mist: Efficient Distributed Training of Large Language Models via Memory-Parallelism Co-Optimization (☆21, updated Feb 5, 2026)
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core (☆162, updated Jan 22, 2026)
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training (☆222, updated Aug 19, 2024)
- Accelerate LLM preference tuning via prefix sharing with a single line of code (☆51, updated Jul 4, 2025)
- ☆42, updated Sep 8, 2025
- A baseline repository of Auto-Parallelism in Training Neural Networks (☆147, updated Jun 25, 2022)
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud (☆1,527, updated Dec 15, 2025)
- Toolchain built around Megatron-LM for distributed training (☆86, updated Dec 7, 2025)
- ☆38, updated Aug 7, 2025
- ☆22, updated Apr 22, 2024
- PyTorch code examples for measuring the performance of collective communication calls in AI workloads (☆18, updated Sep 18, 2025)
- ☆16, updated Mar 30, 2024
- ☆77, updated May 4, 2021
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo (☆1,652, updated this week)
- A high-performance distributed deep learning system targeting large-scale and automated distributed training (☆333, updated Dec 13, 2025)
- [MLSys'22] Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective (☆22, updated Sep 11, 2023)
- [IJCAI2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… (☆52, updated May 31, 2023)
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer (☆163, updated this week)
- A benchmark suited especially for deep learning operators (☆42, updated Feb 13, 2023)
- Tiny-FSDP, a minimalistic re-implementation of the PyTorch FSDP (☆99, updated Aug 20, 2025; see the shard/all-gather sketch after this list)
- ☆52, updated May 19, 2025
- CFR implementation of a poker bot (☆12, updated Feb 17, 2023)
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training (☆271, updated Mar 31, 2023)
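Several entries above name concrete distributed-training techniques; the sketches below illustrate the core ideas in plain PyTorch. They are hedged toy illustrations, not code from any of the listed repositories. First, the head/sequence all-to-all at the heart of DeepSpeed-Ulysses-style sequence parallelism (one of the two schemes USP unifies): before attention, ranks trade sequence shards for head shards so each rank sees the full sequence for a subset of heads. The collective is simulated here with reshapes in a single process; the leading dimension plays the role of the rank axis.

```python
# Single-process simulation of the sequence-parallel all-to-all.
import torch

P, B, S, H, D = 4, 2, 16, 8, 32        # ranks, batch, seq len, heads, head dim
x = torch.randn(P, B, S // P, H, D)    # each "rank" holds a 1/P sequence shard

# all-to-all: each rank keeps H/P heads and receives all S sequence positions
y = (x.reshape(P, B, S // P, P, H // P, D)   # split heads into P groups
       .permute(3, 1, 0, 2, 4, 5)            # "send" head group p to rank p
       .reshape(P, B, S, H // P, D))         # concat seq chunks by source rank

assert y.shape == (P, B, S, H // P, D)       # full sequence, sharded heads
```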
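Next, the observation behind zero-bubble pipeline schedules: backward can be split into a "B" step that computes the input gradient (on the critical path, since the upstream stage is waiting for it) and a "W" step that computes weight gradients (deferrable into what would otherwise be pipeline bubbles). A minimal single-layer illustration using `torch.autograd.grad`:

```python
# Toy backward split in the spirit of zero-bubble scheduling.
import torch

layer = torch.nn.Linear(16, 16)
x = torch.randn(4, 16, requires_grad=True)
y = layer(x)
upstream_grad = torch.randn_like(y)  # stands in for the grad from the next stage

# B step: input gradient only; keep the graph alive for the W step.
(grad_x,) = torch.autograd.grad(
    y, x, grad_outputs=upstream_grad, retain_graph=True)
# ... grad_x would be sent to the previous pipeline stage right away ...

# W step: weight gradients, computed later, e.g. while communication is in flight.
grad_w, grad_b = torch.autograd.grad(
    y, (layer.weight, layer.bias), grad_outputs=upstream_grad)
```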
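The ring-attention and DistFlashAttn entries both rest on the fact that exact softmax attention can be accumulated one K/V block at a time with a running log-sum-exp, so the full S x S score matrix never materializes. The sketch below simulates the ring by looping over blocks locally; in the real systems each block lives on a different GPU and circulates via point-to-point sends.

```python
# Blockwise (online-softmax) attention, checked against the dense reference.
import torch

def blockwise_attention(q, k, v, block_size=128):
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)                       # running numerator
    lse = torch.full(q.shape[:-1], float("-inf"),   # running log-sum-exp
                     dtype=q.dtype, device=q.device)
    for start in range(0, k.shape[-2], block_size):
        kb = k[..., start:start + block_size, :]
        vb = v[..., start:start + block_size, :]
        scores = q @ kb.transpose(-2, -1) * scale        # [..., Sq, Bk]
        block_lse = torch.logsumexp(scores, dim=-1)      # [..., Sq]
        block_out = torch.softmax(scores, dim=-1) @ vb   # [..., Sq, D]
        new_lse = torch.logaddexp(lse, block_lse)
        # rescale the old accumulator and fold in the new block
        out = out * (lse - new_lse).exp().unsqueeze(-1) \
            + block_out * (block_lse - new_lse).exp().unsqueeze(-1)
        lse = new_lse
    return out

q, k, v = (torch.randn(2, 4, 512, 64) for _ in range(3))
ref = torch.softmax(q @ k.transpose(-2, -1) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(blockwise_attention(q, k, v), ref, atol=1e-4)
```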
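The "release and resume" entry works at the CUDA allocator level; the hedged sketch below gets a similar effect in stock PyTorch by the crudest possible means, host offload. The `PausableTensor` wrapper is invented for this illustration.

```python
# Naive release/resume via CPU offload (not the listed repo's mechanism).
import torch

class PausableTensor:
    """Wrapper whose payload can temporarily give up its device memory."""
    def __init__(self, t: torch.Tensor):
        self.device = t.device
        self.t = t
        self.cpu_copy = None

    def release(self):
        self.cpu_copy = self.t.cpu()   # stash contents on the host
        self.t = None                  # drop the only device reference

    def resume(self) -> torch.Tensor:
        self.t = self.cpu_copy.to(self.device)
        self.cpu_copy = None
        return self.t

buf = PausableTensor(torch.randn(1024, 1024))   # add .cuda() on a GPU machine
buf.release()    # device memory is now free for other work
t = buf.resume() # contents restored at the original device
```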
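Finally, the parameter flow that Tiny-FSDP re-implements, as a single-process cartoon: each rank owns a 1/P shard, the full weight is materialized only around forward/backward by an all-gather, and gradients are reduce-scattered back to the shard owners. Collectives are simulated with `cat`/`chunk`; nothing here is the repo's actual code.

```python
# Cartoon of the FSDP shard -> all-gather -> compute -> reduce-scatter cycle.
import torch

WORLD = 4
full_weight = torch.randn(8, 8)
shards = list(full_weight.flatten().chunk(WORLD))  # each rank stores one shard

def all_gather(shards):
    # real FSDP: torch.distributed.all_gather_into_tensor
    return torch.cat(shards).view(8, 8)

def reduce_scatter(per_rank_grads):
    # real FSDP: torch.distributed.reduce_scatter_tensor (sum, then split)
    summed = torch.stack(per_rank_grads).sum(0)
    return list(summed.flatten().chunk(WORLD))

x = torch.randn(2, 8)
w = all_gather(shards).requires_grad_()   # transiently materialize full param
loss = (x @ w).square().mean()
loss.backward()
# one process stands in for all ranks, so every "rank" produced the same grad
grad_shards = reduce_scatter([w.grad] * WORLD)
del w                                     # free the full parameter again
```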