thunlp / Seq1F1B
Sequence-level 1F1B schedule for LLMs.
☆19 · Updated 4 months ago
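For orientation, here is a minimal pure-Python sketch of the classic 1F1B (one-forward-one-backward) pipeline schedule that Seq1F1B refines; per the repo's description, Seq1F1B additionally splits each microbatch along the sequence dimension, which this sketch does not model. The helper name `one_f_one_b_order` and the stage/microbatch counts are illustrative assumptions, not taken from the repo.

```python
# Sketch of the classic 1F1B pipeline schedule (warm-up forwards,
# steady-state alternation of one forward and one backward, then
# cool-down backwards). `one_f_one_b_order` is a hypothetical helper,
# not an API from the Seq1F1B repository.

def one_f_one_b_order(stage, num_stages, num_microbatches):
    """Yield ('F', mb) / ('B', mb) operations for one pipeline stage."""
    # Warm-up: earlier stages queue more forwards before the first backward.
    warmup = min(num_stages - stage - 1, num_microbatches)
    fwd = bwd = 0
    for _ in range(warmup):  # warm-up forwards
        yield ("F", fwd)
        fwd += 1
    for _ in range(num_microbatches - warmup):  # steady state: 1F then 1B
        yield ("F", fwd)
        fwd += 1
        yield ("B", bwd)
        bwd += 1
    while bwd < num_microbatches:  # cool-down backwards
        yield ("B", bwd)
        bwd += 1

if __name__ == "__main__":
    # Example: 4 pipeline stages, 6 microbatches.
    for stage in range(4):
        ops = " ".join(f"{op}{mb}" for op, mb in one_f_one_b_order(stage, 4, 6))
        print(f"stage {stage}: {ops}")
```

The printed order shows the key property of 1F1B: each stage holds at most `warmup + 1` in-flight activations instead of all `num_microbatches`, which is what makes the schedule memory-friendly.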
Alternatives and similar repositories for Seq1F1B:
Users interested in Seq1F1B are comparing it to the repositories listed below.
- ☆72 · Updated 4 years ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆53 · Updated 9 months ago
- ☆96 · Updated 5 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆109 · Updated last week
- ☆53 · Updated last year
- ☆93 · Updated 8 months ago
- ☆57 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. ☆88 · Updated last week
- A lightweight design for computation-communication overlap. ☆67 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. ☆120 · Updated 4 months ago
- ☆66 · Updated 2 weeks ago
- A GPU-optimized system for efficient long-context LLMs decoding with low-bit KV cache. ☆34 · Updated 2 weeks ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆62 · Updated last month
- ☆79 · Updated 2 years ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 5 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆44 · Updated last month
- ☆59 · Updated 10 months ago
- An efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆248 · Updated 6 months ago
- Sequence-level 1F1B schedule for LLMs. ☆17 · Updated 11 months ago
- ☆143 · Updated 9 months ago
- ☆82 · Updated 3 years ago
- ☆70 · Updated 4 months ago
- ☆104 · Updated last month
- Implement Flash Attention using Cute. ☆78 · Updated 4 months ago
- DeeperGEMM: crazy optimized version ☆68 · Updated this week
- 16-fold memory access reduction with nearly no loss ☆91 · Updated last month
- ATC23 AE ☆45 · Updated last year
- FP8 flash attention on the Ada architecture, implemented with the CUTLASS library ☆64 · Updated 8 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆209 · Updated 8 months ago
- ☆74 · Updated 3 weeks ago