thunlp / Seq1F1B
Sequence-level 1F1B schedule for LLMs.
☆32 · Updated last month
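For context, a 1F1B ("one forward, one backward") schedule has each pipeline stage run a short warm-up of forward micro-batches and then alternate one forward with one backward, which bounds the activations held in flight; Seq1F1B applies the same pattern at sequence-chunk granularity. A minimal sketch of the resulting step order for one stage (illustrative only; the function and its parameters are hypothetical, not Seq1F1B's API):

```python
# Hypothetical sketch of 1F1B step ordering for one pipeline stage.
# num_stages: pipeline depth; stage: 0-based stage index;
# num_chunks: number of micro-batches (sequence chunks in Seq1F1B).
def one_f_one_b(num_stages: int, stage: int, num_chunks: int) -> list[str]:
    warmup = min(num_stages - stage - 1, num_chunks)  # forwards before the first backward
    steps = [f"F{i}" for i in range(warmup)]          # warm-up: forwards only
    for i in range(num_chunks - warmup):              # steady state: alternate F and B
        steps += [f"F{warmup + i}", f"B{i}"]
    steps += [f"B{i}" for i in range(num_chunks - warmup, num_chunks)]  # cool-down
    return steps

print(one_f_one_b(num_stages=4, stage=0, num_chunks=8))
# ['F0', 'F1', 'F2', 'F3', 'B0', 'F4', 'B1', ...]: stage 0 holds at most
# 4 un-backpropagated chunks, versus all 8 under GPipe's all-F-then-all-B order.
```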
Alternatives and similar repositories for Seq1F1B
Users interested in Seq1F1B are comparing it to the libraries listed below.
- A lightweight design for computation-communication overlap (see the stream-overlap sketch after this list). ☆181 · Updated 2 weeks ago
- ☆100 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last month
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆138 · Updated last month
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆65 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆125 · Updated 4 months ago
- Pipeline Parallelism Emulation and Visualization ☆68 · Updated 4 months ago
- ☆65 · Updated 6 months ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆45 · Updated last month
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆221 · Updated 2 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆67 · Updated 7 months ago
- ☆107 · Updated 5 months ago
- ☆148 · Updated 7 months ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆60 · Updated 2 weeks ago
- ☆124 · Updated 11 months ago
- Flash Attention implemented using CuTe. ☆96 · Updated 10 months ago
- ☆59 · Updated last year
- DeeperGEMM: crazy optimized version ☆72 · Updated 5 months ago
- ☆75 · Updated 4 years ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆156 · Updated 2 weeks ago
- ☆43 · Updated last year
- ☆153 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 7 months ago
- A tile-based language built for AI computation across all scales ☆67 · Updated 3 weeks ago
- Allows torch tensor memory to be released and resumed later ☆150 · Updated last week
- ATC23 AE ☆47 · Updated 2 years ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆20 · Updated last year
- ☆78 · Updated 6 months ago
- High-performance Transformer implementation in C++. ☆138 · Updated 9 months ago
- [EuroSys '25] Mist: Efficient Distributed Training of Large Language Models via Memory-Parallelism Co-Optimization ☆18 · Updated 2 months ago
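The computation-communication overlap entry at the top of this list refers to hiding communication latency behind GPU compute. A minimal PyTorch sketch of the generic pattern using a side CUDA stream (illustrative only: a device-to-device copy stands in for a real collective such as an all-reduce, and nothing here reflects the linked repo's actual design):

```python
import torch

# Generic compute/communication overlap with two CUDA streams.
assert torch.cuda.is_available()
comm_stream = torch.cuda.Stream()            # side stream for "communication"

x = torch.randn(4096, 4096, device="cuda")
w = torch.randn(4096, 4096, device="cuda")
grad_buf = torch.randn(4096, 4096, device="cuda")
mirror = torch.empty_like(grad_buf)

y = x @ w                                    # compute on the default stream
with torch.cuda.stream(comm_stream):
    # stand-in for an async collective; runs concurrently with the matmul
    mirror.copy_(grad_buf, non_blocking=True)
y = torch.relu(y)                            # more compute while the copy is in flight
torch.cuda.current_stream().wait_stream(comm_stream)  # join before touching `mirror`
torch.cuda.synchronize()
```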