MayDomine / Seq1F1B
Sequence-level 1F1B schedule for LLMs.
☆17 · Updated 10 months ago
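For orientation, here is a minimal, hypothetical sketch of the idea behind a sequence-level 1F1B schedule (an illustration of the general technique, not code from this repository): each micro-batch is split into sequence chunks, and a pipeline stage interleaves forward and backward passes at chunk granularity rather than per micro-batch. All names below are invented for the example, and real schedules also handle communication and cross-chunk dependencies, which are omitted here.

```python
# Hypothetical sketch of chunk-granularity 1F1B scheduling on one pipeline
# stage: warmup forwards, a one-forward-one-backward steady state, then a
# backward drain. Not the Seq1F1B repo's actual implementation.

def seq1f1b_stage_order(stage, num_stages, num_microbatches, num_chunks):
    """Return the F/B order for one stage as ('F'|'B', microbatch, chunk)."""
    fwd = [(mb, ck) for mb in range(num_microbatches)
                    for ck in range(num_chunks)]
    # Backward visits each micro-batch's chunks in reverse sequence order,
    # since gradients flow from the last tokens back to the first.
    bwd = [(mb, ck) for mb in range(num_microbatches)
                    for ck in reversed(range(num_chunks))]
    warmup = min(num_stages - stage - 1, len(fwd))  # simplified warmup depth
    order = [('F', *fwd[i]) for i in range(warmup)]
    f, b = warmup, 0
    while f < len(fwd):            # steady state: one forward, one backward
        order.append(('F', *fwd[f]))
        f += 1
        order.append(('B', *bwd[b]))
        b += 1
    while b < len(bwd):            # cooldown: drain the remaining backwards
        order.append(('B', *bwd[b]))
        b += 1
    return order

# Stage 0 of a 2-stage pipeline, 2 micro-batches, each split into 2 chunks.
print(seq1f1b_stage_order(stage=0, num_stages=2,
                          num_microbatches=2, num_chunks=2))
```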
Alternatives and similar repositories for Seq1F1B:
Users interested in Seq1F1B are comparing it to the libraries listed below.
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆75 · Updated this week
- Odysseus: Playground of LLM Sequence Parallelism ☆68 · Updated 10 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆81 · Updated 5 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆51 · Updated 8 months ago
- Distributed IO-aware Attention algorithm ☆19 · Updated 7 months ago
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆209 · Updated 8 months ago
- ☆68 · Updated 4 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆106 · Updated 2 months ago
- ☆53 · Updated last year
- ☆95 · Updated 5 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆118 · Updated 3 months ago
- ☆82 · Updated 3 years ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆101 · Updated last month
- 16-fold memory access reduction with nearly no loss ☆89 · Updated 3 weeks ago
- Implementations of several LLM KV cache sparsity methods ☆31 · Updated 10 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆113 · Updated 4 months ago
- Sequence-level 1F1B schedule for LLMs. ☆18 · Updated 3 months ago
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… ☆51 · Updated 9 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆85 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆72 · Updated 7 months ago
- ☆92 · Updated 7 months ago
- ATC23 AE ☆45 · Updated last year
- Vocabulary Parallelism ☆17 · Updated last month
- ☆54 · Updated last week
- ☆48 · Updated 4 months ago
- A simple calculation for LLM MFU (see the worked example after this list). ☆34 · Updated last month
- Code for the paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆85 · Updated this week
- ☆72 · Updated 3 years ago
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆47 · Updated 9 months ago
- ☆81 · Updated 3 weeks ago
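On the MFU entry above: MFU (model FLOPs utilization) is conventionally the ratio of achieved model FLOP/s to the hardware's peak FLOP/s, with training FLOPs per token approximated as 6N for an N-parameter dense model (forward ~2N, backward ~4N, attention terms ignored). The sketch below is a hypothetical worked example of that textbook formula, not code from the linked repo; all numbers are illustrative.

```python
# Worked MFU example: achieved model FLOP/s divided by peak hardware FLOP/s.

def mfu(n_params, tokens_per_sec, num_gpus, peak_flops_per_gpu):
    achieved = 6 * n_params * tokens_per_sec   # model FLOP/s actually done
    peak = num_gpus * peak_flops_per_gpu       # theoretical hardware FLOP/s
    return achieved / peak

# Example: a 7B-parameter model training at 4,000 tokens/s per GPU on
# 8 GPUs with a 312 TFLOP/s BF16 peak (A100-class hardware).
print(f"MFU = {mfu(7e9, 4000 * 8, 8, 312e12):.1%}")  # -> MFU = 53.8%
```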