thunlp / Seq1F1B
Sequence-level 1F1B schedule for LLMs.
☆38 · Updated 5 months ago
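For context, Seq1F1B builds on the classic 1F1B (one-forward-one-backward) pipeline schedule, refining it to sequence-level granularity. Below is a minimal sketch of the baseline 1F1B op ordering for a single pipeline stage; the function name and structure are illustrative, not the repository's actual API.

```python
# Minimal sketch of the baseline 1F1B (one-forward-one-backward)
# pipeline schedule that Seq1F1B refines to sequence-level chunks.
# This is an illustration, not code from the Seq1F1B repository.

def one_f_one_b(stage: int, num_stages: int, num_microbatches: int):
    """Return the op order [('F', i), ('B', j), ...] for one stage."""
    # Warmup: earlier stages run more forwards before the first backward.
    warmup = min(num_stages - stage - 1, num_microbatches)
    order = [("F", i) for i in range(warmup)]
    f, b = warmup, 0
    # Steady state: alternate one forward with one backward,
    # which caps in-flight activations per stage.
    while f < num_microbatches:
        order.append(("F", f)); f += 1
        order.append(("B", b)); b += 1
    # Cooldown: drain the remaining backwards.
    while b < num_microbatches:
        order.append(("B", b)); b += 1
    return order

print(one_f_one_b(stage=0, num_stages=4, num_microbatches=6))
```

Seq1F1B applies this same interleaving pattern, but splits each micro-batch along the sequence dimension so the forward/backward units are sub-sequence chunks, shrinking pipeline bubbles and per-stage activation memory.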
Alternatives and similar repositories for Seq1F1B
Users interested in Seq1F1B are comparing it to the libraries listed below.
- A lightweight design for computation-communication overlap. ☆219 · Updated 2 weeks ago
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer ☆158 · Updated 4 months ago
- ☆105 · Updated last year
- LLM training technologies developed by kwai ☆70 · Updated 2 weeks ago
- Pipeline Parallelism Emulation and Visualization ☆77 · Updated 3 weeks ago
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- ☆65 · Updated 9 months ago
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆66 · Updated last month
- Nex Venus Communication Library ☆72 · Updated 2 months ago
- Tile-based language built for AI computation across all scales ☆119 · Updated last week
- ☆113 · Updated 8 months ago
- Implement Flash Attention using Cute. ☆100 · Updated last year
- ☆77 · Updated 4 years ago
- ☆41 · Updated 3 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆141 · Updated 8 months ago
- ☆85 · Updated 9 months ago
- Utility scripts for PyTorch (e.g. make Perfetto show some disappearing kernels; memory profiler that understands more low-level allocatio…) ☆82 · Updated 4 months ago
- ☆155 · Updated 11 months ago
- ☆130 · Updated last year
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning ☆59 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆233 · Updated 2 years ago
- DeeperGEMM: crazy optimized version ☆73 · Updated 8 months ago
- Building the Virtuous Cycle for AI-driven LLM Systems ☆151 · Updated this week
- Allow torch tensor memory to be released and resumed later ☆213 · Updated 3 weeks ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Updated last year
- ATC23 AE ☆46 · Updated 2 years ago
- DeepXTrace is a lightweight tool for precisely diagnosing slow ranks in DeepEP-based environments. ☆91 · Updated 2 weeks ago
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆79 · Updated last month
- ☆47 · Updated last year
- High-performance Transformer implementation in C++. ☆150 · Updated last year