LLM training technologies developed by Kwai
☆70 · Jan 21, 2026 · Updated last month
Alternatives and similar repositories for Megatron-Kwai
Users interested in Megatron-Kwai are comparing it to the libraries listed below.
- Ongoing research training transformer models at scale · ☆18 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference · ☆644 · Jan 15, 2026 · Updated last month
- Zero Bubble Pipeline Parallelism · ☆451 · May 7, 2025 · Updated 10 months ago
- Sequence-level 1F1B schedule for LLMs · ☆38 · Aug 26, 2025 · Updated 6 months ago
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs · ☆984 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs · ☆1,264 · Aug 28, 2025 · Updated 6 months ago
- Distributed IO-aware Attention algorithm · ☆24 · Sep 24, 2025 · Updated 5 months ago
- Ring attention implementation with flash attention · ☆987 · Sep 10, 2025 · Updated 5 months ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments · ☆93 · Jan 16, 2026 · Updated last month
- Accelerate Video Diffusion Inference via Sketching-Rendering Cooperation · ☆19 · Jun 11, 2025 · Updated 8 months ago
- Bamboo is a system for running large pipeline-parallel DNNs affordably, reliably, and efficiently using spot instances · ☆55 · Dec 11, 2022 · Updated 3 years ago
- Distributed Compiler based on Triton for Parallel Systems · ☆1,371 · Feb 13, 2026 · Updated 3 weeks ago
- Allow torch tensor memory to be released and resumed later · ☆220 · Feb 9, 2026 · Updated last month
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines · ☆19 · Dec 8, 2023 · Updated 2 years ago
- ☆51 · Apr 30, 2025 · Updated 10 months ago
- [EuroSys'25] Mist: Efficient Distributed Training of Large Language Models via Memory-Parallelism Co-Optimization · ☆21 · Feb 5, 2026 · Updated last month
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core · ☆167 · Jan 22, 2026 · Updated last month
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training · ☆222 · Aug 19, 2024 · Updated last year
- ☆42 · Sep 8, 2025 · Updated 6 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆51 · Jul 4, 2025 · Updated 8 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks · ☆147 · Jun 25, 2022 · Updated 3 years ago
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud · ☆1,534 · Dec 15, 2025 · Updated 2 months ago
- ☆16 · Mar 30, 2024 · Updated last year
- ☆22 · Apr 22, 2024 · Updated last year
- PyTorch code examples for measuring the performance of collective communication calls in AI workloads · ☆19 · Sep 18, 2025 · Updated 5 months ago
- ☆38 · Aug 7, 2025 · Updated 7 months ago
- Toolchain built around Megatron-LM for distributed training · ☆89 · Dec 7, 2025 · Updated 3 months ago
- ☆78 · May 4, 2021 · Updated 4 years ago
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo · ☆1,701 · Updated this week
- A high-performance distributed deep learning system targeting large-scale and automated distributed training · ☆334 · Dec 13, 2025 · Updated 2 months ago
- [MLSys'22] Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective · ☆22 · Sep 11, 2023 · Updated 2 years ago
- A benchmark suited especially for deep learning operators · ☆42 · Feb 13, 2023 · Updated 3 years ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer · ☆165 · Feb 11, 2026 · Updated 3 weeks ago
- CFR implementation of a poker bot · ☆12 · Feb 17, 2023 · Updated 3 years ago
- Tiny-FSDP, a minimalistic re-implementation of PyTorch FSDP · ☆99 · Aug 20, 2025 · Updated 6 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training · ☆271 · Mar 31, 2023 · Updated 2 years ago
- Pipeline Parallelism Emulation and Visualization · ☆79 · Jan 8, 2026 · Updated 2 months ago
- NCCL Profiling Kit · ☆152 · Jul 1, 2024 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM · ☆185 · Feb 19, 2026 · Updated 2 weeks ago