Victarry / PP-Schedule-Visualization
Pipeline Parallelism Emulation and Visualization
☆68 · Updated 4 months ago
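As context for the alternatives below, here is a minimal sketch of what 1F1B pipeline-schedule emulation involves. It assumes unit-cost forward/backward passes and zero communication latency, and the function names (`build_1f1b_order`, `emulate`) are illustrative, not this repository's API:

```python
"""Minimal 1F1B pipeline-schedule emulator (illustrative sketch only)."""

def build_1f1b_order(stage: int, num_stages: int, num_micro: int):
    """Per-stage static op order: warmup forwards, steady 1F1B, cooldown backwards."""
    warmup = min(num_stages - stage - 1, num_micro)
    ops = [("F", i) for i in range(warmup)]
    f, b = warmup, 0
    while f < num_micro or b < num_micro:
        if f < num_micro:
            ops.append(("F", f))
            f += 1
        if b < num_micro:
            ops.append(("B", b))
            b += 1
    return ops

def emulate(num_stages: int = 4, num_micro: int = 8):
    orders = [build_1f1b_order(s, num_stages, num_micro) for s in range(num_stages)]
    done = {}                               # (kind, micro, stage) -> finish time
    clock = [0] * num_stages                # next free time slot per stage
    idx = [0] * num_stages                  # next op index per stage
    timeline = [[] for _ in range(num_stages)]
    remaining = sum(len(o) for o in orders)
    while remaining:
        progressed = False
        for s in range(num_stages):
            if idx[s] == len(orders[s]):
                continue
            kind, m = orders[s][idx[s]]
            if kind == "F":                 # F needs the same microbatch's F upstream
                key = ("F", m, s - 1)
                if s > 0 and key not in done:
                    continue
                ready = done.get(key, 0)
            else:                           # B needs the downstream B plus its own F
                key = ("B", m, s + 1)
                if s < num_stages - 1 and key not in done:
                    continue
                ready = max(done.get(key, 0), done[("F", m, s)])
            start = max(clock[s], ready)
            done[(kind, m, s)] = clock[s] = start + 1
            timeline[s].append((start, kind, m))
            idx[s] += 1
            remaining -= 1
            progressed = True
        assert progressed, "schedule deadlocked"
    return timeline

if __name__ == "__main__":
    for s, row in enumerate(emulate()):
        cells = {start: f"{kind}{m}" for start, kind, m in row}
        horizon = max(cells) + 1
        print(f"stage {s}: " + " ".join(cells.get(t, "..") for t in range(horizon)))
```

Running it prints one lane per stage, with `F`/`B` cells and `..` idle slots (the pipeline bubbles): exactly the kind of grid such visualizers render.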
Alternatives and similar repositories for PP-Schedule-Visualization
Users interested in PP-Schedule-Visualization are comparing it to the libraries listed below.
- PyTorch bindings for CUTLASS grouped GEMM (see the grouped-GEMM sketch after this list). ☆125 · Updated 4 months ago
- ☆309 · Updated 3 weeks ago
- Zero Bubble Pipeline Parallelism ☆433 · Updated 5 months ago
- A lightweight design for computation-communication overlap. ☆181 · Updated 2 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆156 · Updated 2 weeks ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆66 · Updated last year
- Allows torch tensor memory to be released and resumed later ☆157 · Updated this week
- A collection of memory-efficient attention operators implemented in the Triton language. ☆282 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆117 · Updated last month
- ☆100 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · Updated 11 months ago
- ☆148 · Updated 7 months ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆139 · Updated last month
- Sequence-level 1F1B schedule for LLMs. ☆32 · Updated 2 months ago
- ☆97 · Updated 7 months ago
- ☆141 · Updated 10 months ago
- Utility scripts for PyTorch (e.g., a memory profiler that understands lower-level allocations such as NCCL's) ☆62 · Updated last month
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆266 · Updated 3 months ago
- High Performance Grouped GEMM in PyTorch ☆31 · Updated 3 years ago
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆439 · Updated this week
- ☆241 · Updated last year
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆249 · Updated 3 months ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆63 · Updated this week
- ☆107 · Updated 5 months ago
- 🤖FFPA: Extends FlashAttention-2 with Split-D for ~O(1) SRAM complexity at large headdim; 1.8x~3x↑🎉 vs SDPA EA. ☆226 · Updated 2 months ago
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆114 · Updated last week
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance. ☆124 · Updated 5 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆264 · Updated this week
- DeeperGEMM: a heavily optimized version ☆72 · Updated 5 months ago
- ☆43 · Updated last year
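Several entries above (the CUTLASS grouped-GEMM bindings and the grouped GEMM in PyTorch) center on grouped GEMM. As a semantic reference only, here is a pure-PyTorch sketch of what a grouped GEMM computes; real libraries fuse the per-group multiplies into one kernel, and `grouped_gemm_reference` is a hypothetical name, not any listed repo's API:

```python
"""Reference semantics of grouped GEMM: many independent matmuls of
differing shapes. Real grouped-GEMM kernels fuse these into one launch."""
import torch

def grouped_gemm_reference(As: list[torch.Tensor], Bs: list[torch.Tensor]) -> list[torch.Tensor]:
    # Problem i computes As[i] @ Bs[i]; shapes may differ across i, which is
    # what distinguishes grouped GEMM from a plain batched GEMM.
    assert len(As) == len(Bs)
    return [a @ b for a, b in zip(As, Bs)]

if __name__ == "__main__":
    # Typical MoE-style use: one GEMM per expert, with varying token counts.
    shapes = [(128, 64), (37, 64), (256, 64)]    # tokens x hidden, per expert
    As = [torch.randn(m, k) for m, k in shapes]
    Bs = [torch.randn(64, 32) for _ in shapes]   # shared output dim 32
    outs = grouped_gemm_reference(As, Bs)
    print([tuple(o.shape) for o in outs])        # [(128, 32), (37, 32), (256, 32)]
```

The point of the grouping is that problem shapes may differ per group (typical in MoE layers, one GEMM per expert), which a uniform-shape `torch.bmm` cannot express.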