sail-sg / zero-bubble-pipeline-parallelism
Zero Bubble Pipeline Parallelism
☆428 · Updated 4 months ago
Alternatives and similar repositories for zero-bubble-pipeline-parallelism
Users interested in zero-bubble-pipeline-parallelism are comparing it to the libraries listed below.
- PyTorch bindings for CUTLASS grouped GEMM. ☆152 · Updated last month
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆412 · Updated this week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆421 · Updated 4 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆458 · Updated 5 months ago
- Perplexity GPU Kernels ☆476 · Updated 2 weeks ago
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last week
- Pipeline Parallelism Emulation and Visualization ☆67 · Updated 3 months ago
- ☆298 · Updated last week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆571 · Updated 2 weeks ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆279 · Updated last year
- Disaggregated serving system for Large Language Models (LLMs). ☆697 · Updated 5 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆64 · Updated last year
- Allow torch tensor memory to be released and resumed later ☆142 · Updated last week
- A low-latency & high-throughput serving engine for LLMs ☆422 · Updated 4 months ago
- Microsoft Automatic Mixed Precision Library ☆622 · Updated last year
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆248 · Updated 2 months ago
- A PyTorch Native LLM Training Framework ☆874 · Updated 3 weeks ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,148 · Updated last week
- Ring attention implementation with flash attention ☆885 · Updated 3 weeks ago
- Official repository for DistFlashAttn: Distributed Memory-Efficient Attention for Long-Context LLM Training ☆216 · Updated last year
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆98 · Updated 2 weeks ago
- ☆121 · Updated 9 months ago
- Materials for learning SGLang ☆594 · Updated this week
- Applied AI experiments and examples for PyTorch ☆296 · Updated last month
- Ultra and Unified CCL ☆567 · Updated this week
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆146 · Updated 3 years ago
- ☆338 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆123 · Updated 4 months ago
- ☆147 · Updated 7 months ago
- ☆238 · Updated last year