Victarry / PP-Schedule-Visualization
Pipeline Parallelism Emulation and Visualization
☆70 · Updated 5 months ago
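To make concrete what "emulating" a pipeline-parallel schedule involves, here is a minimal, self-contained Python sketch of a classic 1F1B schedule emulator: it builds each stage's op order, then replays it while honoring cross-stage dependencies, and reports the makespan and bubble fraction. This is an illustrative assumption, not PP-Schedule-Visualization's actual API; the function names, the uniform `t_fwd`/`t_bwd` costs, and the greedy event loop are all invented for this sketch.

```python
# Hypothetical sketch of 1F1B pipeline-schedule emulation (not the repo's API).
# Each stage runs warmup forwards, then alternates one-forward-one-backward,
# then drains the remaining backwards; timing honors cross-stage dependencies.

def one_f_one_b_order(stage, stages, microbatches):
    """Per-stage op order for a classic 1F1B schedule."""
    warmup = min(stages - stage - 1, microbatches)
    order = [("F", j) for j in range(warmup)]
    f, b = warmup, 0
    while f < microbatches:                    # steady state: 1F, then 1B
        order += [("F", f), ("B", b)]
        f, b = f + 1, b + 1
    order += [("B", j) for j in range(b, microbatches)]  # cooldown
    return order

def emulate(stages=4, microbatches=8, t_fwd=1.0, t_bwd=2.0):
    """Return {(kind, stage, microbatch): finish_time} for the whole schedule."""
    orders = [one_f_one_b_order(s, stages, microbatches) for s in range(stages)]
    idx = [0] * stages                         # next op per stage
    busy = [0.0] * stages                      # per-stage busy-until clock
    done = {}                                  # (kind, stage, mb) -> finish time
    total = sum(map(len, orders))
    while len(done) < total:
        progressed = False
        for s in range(stages):
            if idx[s] == len(orders[s]):
                continue
            kind, j = orders[s][idx[s]]
            # F depends on F at the previous stage; B on B at the next stage
            # (the last stage's B depends on its own F).
            if kind == "F":
                dep = ("F", s - 1, j) if s > 0 else None
            else:
                dep = ("B", s + 1, j) if s < stages - 1 else ("F", s, j)
            if dep is not None and dep not in done:
                continue                       # dependency not finished yet
            start = max(busy[s], done.get(dep, 0.0))
            busy[s] = start + (t_fwd if kind == "F" else t_bwd)
            done[(kind, s, j)] = busy[s]
            idx[s] += 1
            progressed = True
        assert progressed, "schedule deadlocked"
    return done

if __name__ == "__main__":
    finish = emulate()
    makespan = max(finish.values())
    ideal = 8 * (1.0 + 2.0)                    # microbatches * (t_fwd + t_bwd)
    print(f"makespan={makespan}, bubble={(makespan - ideal) / makespan:.2%}")
```

Under these assumed uniform costs the emulator reproduces the textbook 1F1B bubble of (stages − 1) · (t_fwd + t_bwd): for 4 stages and 8 microbatches it prints a makespan of 33 with a ~27% bubble, which is exactly the kind of timeline a schedule visualizer renders per stage.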
Alternatives and similar repositories for PP-Schedule-Visualization
Users interested in PP-Schedule-Visualization are comparing it to the repositories listed below.
- ☆316 · Updated last week
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Parallelism ☆66 · Updated last year
- Zero Bubble Pipeline Parallelism ☆433 · Updated 6 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆165 · Updated last month
- A lightweight design for computation-communication overlap. ☆187 · Updated last month
- Allow torch tensor memory to be released and resumed later ☆167 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. ☆127 · Updated 5 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆119 · Updated last month
- A collection of memory efficient attention operators implemented in the Triton language. ☆284 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated last year
- ☆102 · Updated last year
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆143 · Updated 2 months ago
- Utility scripts for PyTorch (e.g. make Perfetto show some disappearing kernels, a memory profiler that understands more low-level allocations…) ☆67 · Updated 2 months ago
- Sequence-level 1F1B schedule for LLMs. ☆37 · Updated 2 months ago
- ☆151 · Updated 8 months ago
- ☆97 · Updated 7 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆252 · Updated 4 months ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆483 · Updated this week
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆128 · Updated last week
- ☆148 · Updated 10 months ago
- ☆44 · Updated last year
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆67 · Updated 2 weeks ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆436 · Updated 5 months ago
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆125 · Updated last month
- ☆65 · Updated 6 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with WMMA, MMA, and CuTe API; achieve peak performance. ⚡️ ☆127 · Updated 6 months ago
- Implement Flash Attention using CuTe. ☆96 · Updated 11 months ago
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆82 · Updated this week
- DeeperGEMM: crazy optimized version ☆73 · Updated 6 months ago
- ☆243 · Updated last year