Victarry / PP-Schedule-Visualization
Pipeline Parallelism Emulation and Visualization
☆43 · Updated last week
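For context on what a schedule emulator computes, the sketch below (illustrative only, not code from this repository) derives the per-stage operation order of the common 1F1B pipeline schedule, the kind of timeline such a tool renders:

```python
# A minimal, illustrative emulation of the 1F1B pipeline schedule
# (not taken from this repository): each stage runs a few warmup
# forward passes, alternates one forward with one backward in the
# steady state, then drains the remaining backward passes.

def one_f_one_b(num_stages: int, num_microbatches: int, stage: int):
    """Return the ('F'/'B', microbatch) sequence executed by one stage."""
    warmup = min(num_stages - stage - 1, num_microbatches)
    ops, fwd, bwd = [], 0, 0
    for _ in range(warmup):            # warmup phase: forwards only
        ops.append(("F", fwd))
        fwd += 1
    while fwd < num_microbatches:      # steady state: one F, one B
        ops.append(("F", fwd))
        fwd += 1
        ops.append(("B", bwd))
        bwd += 1
    while bwd < num_microbatches:      # cooldown: drain backwards
        ops.append(("B", bwd))
        bwd += 1
    return ops

for s in range(4):                     # 4 stages, 8 microbatches
    print(f"stage {s}:", one_f_one_b(4, 8, s))
```

A visualizer draws each such per-stage sequence as one row of a timeline, making pipeline bubbles (idle slots where a stage waits on activations or gradients) visible at a glance.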
Alternatives and similar repositories for PP-Schedule-Visualization
Users interested in PP-Schedule-Visualization are comparing it to the libraries listed below.
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆55 · Updated 10 months ago
- A lightweight design for computation-communication overlap. ☆143 · Updated this week
- ☆96 · Updated 9 months ago
- PyTorch bindings for CUTLASS grouped GEMM (reference semantics are sketched after this list). ☆127 · Updated 5 months ago
- Sequence-level 1F1B schedule for LLMs. ☆28 · Updated this week
- ☆86 · Updated 2 months ago
- ☆60 · Updated last month
- nnScaler: Compiling DNN models for Parallel Training ☆113 · Updated this week
- High-performance Transformer implementation in C++. ☆125 · Updated 5 months ago
- ☆141 · Updated 3 months ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆79 · Updated 7 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak performance. ⚡️ ☆80 · Updated last month
- Compare different hardware platforms via the Roofline Model for LLM inference tasks (see the roofline sketch after this list). ☆100 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. ☆272 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆99 · Updated 3 weeks ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆252 · Updated 7 months ago
- ☆90 · Updated 5 months ago
- Implement Flash Attention using CuTe. ☆87 · Updated 6 months ago
- Examples of CUDA implementations using CUTLASS CuTe ☆197 · Updated 4 months ago
- DeeperGEMM: crazy optimized version ☆69 · Updated last month
- ☆77 · Updated last month
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆183 · Updated 4 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆92 · Updated 3 weeks ago
- ☆74 · Updated 4 years ago
- ☆212 · Updated 11 months ago
- ☆63 · Updated this week
- ☆139 · Updated last year
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆38 · Updated 3 months ago
- ☆117 · Updated last month
- FP8 flash attention on the Ada architecture, implemented with the CUTLASS library. ☆71 · Updated 10 months ago
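Two entries above are PyTorch bindings for CUTLASS grouped GEMM. As a point of reference for what those bindings compute, here is the plain-PyTorch semantics (an illustrative sketch, not the bindings' actual API): a single launch evaluates many independent GEMMs whose shapes may differ per group.

```python
# Reference semantics of a grouped GEMM, sketched in plain PyTorch
# (illustrative; not the actual API of the CUTLASS bindings above):
# one launch computes C_i = A_i @ B_i for many independent groups
# whose M dimensions may differ, as in MoE expert layers.
import torch

def grouped_gemm_reference(As, Bs):
    """Compute C_i = A_i @ B_i for each group i."""
    return [a @ b for a, b in zip(As, Bs)]

# Hypothetical shapes: three groups with different per-expert token counts.
As = [torch.randn(m, 64) for m in (5, 17, 42)]
Bs = [torch.randn(64, 128) for _ in range(3)]
print([tuple(c.shape) for c in grouped_gemm_reference(As, Bs)])
```

A fused grouped kernel avoids the per-group launch overhead of this Python loop, which is where the CUTLASS-backed bindings earn their speedup.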
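The roofline-comparison entry above rests on one formula: attainable throughput is the minimum of the compute roof and memory bandwidth times arithmetic intensity. A self-contained sketch follows; the hardware numbers are illustrative placeholders, not measurements of any specific device.

```python
# Roofline model: attainable TFLOP/s for a kernel is capped by either
# peak compute or memory bandwidth times arithmetic intensity (FLOP/byte).

def roofline(peak_tflops: float, mem_bw_tbps: float, flops_per_byte: float) -> float:
    """Attainable TFLOP/s for a kernel with the given arithmetic intensity."""
    return min(peak_tflops, mem_bw_tbps * flops_per_byte)

# Hypothetical accelerator: 312 TFLOP/s peak, 2 TB/s memory bandwidth.
# Decode-phase GEMV in LLM inference sits near 1 FLOP/byte at batch 1,
# so it lands far below the compute roof (memory bound).
peak, bw = 312.0, 2.0
for intensity in (1, 16, 156, 300):
    print(f"AI={intensity:>4} FLOP/B -> {roofline(peak, bw, intensity):7.1f} TFLOP/s")
```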