thunlp / Seq1F1B
Sequence-level 1F1B schedule for LLMs.
☆38 · Updated 4 months ago
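For context, 1F1B ("one forward, one backward") is the pipeline-parallel schedule that Seq1F1B refines: instead of scheduling whole micro-batches, Seq1F1B splits each sequence into chunks and pipelines those finer-grained units, shrinking pipeline bubbles and peak activation memory for long-sequence LLM training. Below is a minimal, hypothetical Python sketch of the per-stage 1F1B ordering over such schedulable units; the function name and structure are illustrative, not the repository's API.

```python
# Minimal sketch (not the thunlp implementation) of the per-stage 1F1B order.
# Classic 1F1B schedules whole micro-batches; a Seq1F1B-style schedule applies
# the same pattern to (micro-batch, sequence-chunk) units. Names are hypothetical.

def one_f_one_b_order(num_stages: int, stage: int, num_units: int):
    """Forward/backward order seen by one pipeline stage.

    num_units: schedulable units -- micro-batches in classic 1F1B,
    micro-batch x sequence-chunk pairs in a Seq1F1B-style schedule.
    """
    # Earlier stages run more warmup forwards before their first backward.
    warmup = min(num_stages - stage - 1, num_units)
    order, fwd, bwd = [], 0, 0
    for _ in range(warmup):              # warmup phase: forwards only
        order.append(("F", fwd)); fwd += 1
    while fwd < num_units:               # steady state: one forward, one backward
        order.append(("F", fwd)); fwd += 1
        order.append(("B", bwd)); bwd += 1
    while bwd < num_units:               # cooldown: drain remaining backwards
        order.append(("B", bwd)); bwd += 1
    return order

if __name__ == "__main__":
    # 4 stages, 8 units: stage 0 keeps the most activations in flight,
    # which is exactly the memory pressure that sequence chunking reduces.
    for s in range(4):
        print(f"stage {s}:", one_f_one_b_order(num_stages=4, stage=s, num_units=8))
```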
Alternatives and similar repositories for Seq1F1B
Users interested in Seq1F1B are comparing it to the libraries listed below.
- NVSHMEM‑Tutorial: Build a DeepEP‑like GPU Buffer ☆147 · Updated 3 months ago
- ☆103 · Updated last year
- Pipeline Parallelism Emulation and Visualization ☆74 · Updated 6 months ago
- A lightweight design for computation-communication overlap. ☆200 · Updated 2 months ago
- ☆65 · Updated 8 months ago
- LLM training technologies developed by kwai ☆67 · Updated last month
- Implement Flash Attention using Cute. ☆98 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆121 · Updated 3 months ago
- Nex Venus Communication Library ☆66 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. ☆135 · Updated 6 months ago
- ☆112 · Updated 7 months ago
- ☆77 · Updated 4 years ago
- ☆154 · Updated 9 months ago
- ☆36 · Updated 2 months ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆51 · Updated 2 weeks ago
- Allow torch tensor memory to be released and resumed later ☆191 · Updated 3 weeks ago
- Tile-based language built for AI computation across all scales ☆106 · Updated this week
- DeeperGEMM: crazy optimized version ☆73 · Updated 7 months ago
- ☆58 · Updated last year
- Distributed MoE in a Single Kernel [NeurIPS '25] ☆157 · Updated this week
- Utility scripts for PyTorch (e.g. Make Perfetto show some disappearing kernels, Memory profiler that understands more low-level allocatio… ☆72 · Updated 3 months ago
- ☆163 · Updated last year
- ☆126 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆174 · Updated last week
- ☆34 · Updated 9 months ago
- FP8 flash attention implemented with the CUTLASS repository on the Ada architecture ☆78 · Updated last year
- ☆45 · Updated last year
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with low-bit KV cache. ☆71 · Updated last week
- ATC23 AE ☆47 · Updated 2 years ago
- NVIDIA cuTile learn ☆130 · Updated 2 weeks ago