RulinShao / FastCkpt
Python package for rematerialization-aware gradient checkpointing
☆24 · Updated last year
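This page does not document FastCkpt's own API. As a rough illustration of what "rematerialization-aware gradient checkpointing" refers to, here is a minimal sketch using stock PyTorch `torch.utils.checkpoint`; the `CheckpointedMLP` module is a made-up example, not part of FastCkpt:

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint


class CheckpointedMLP(nn.Module):
    """Toy block whose inner activations are rematerialized on backward."""

    def __init__(self, dim: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Activations inside self.net are not stored during forward; they are
        # recomputed (rematerialized) during backward, trading compute for memory.
        return checkpoint(self.net, x, use_reentrant=False)


x = torch.randn(8, 1024, requires_grad=True)
model = CheckpointedMLP()
model(x).sum().backward()  # the checkpointed block runs forward twice
```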
Alternatives and similar repositories for FastCkpt
Users interested in FastCkpt are comparing it to the libraries listed below.
- ☆73 · Updated 4 years ago
- ☆92 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆208 · Updated 9 months ago
- ☆83 · Updated 3 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- ☆38 · Updated last year
- ☆105 · Updated 9 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 11 months ago
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… ☆21 · Updated 8 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆86 · Updated 2 years ago
- FTPipe and related pipeline model parallelism research ☆41 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Stateful LLM Serving ☆70 · Updated 2 months ago
- Supplemental materials for The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 3 weeks ago
- ☆62 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆93 · Updated last month
- Memory footprint reduction for transformer models ☆11 · Updated 2 years ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆75 · Updated 8 months ago
- A resilient distributed training framework ☆95 · Updated last year
- ☆99 · Updated 6 months ago
- A simple calculation for LLM MFU (see the sketch after this list) ☆38 · Updated 2 months ago
- ☆79 · Updated 6 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆48 · Updated 6 months ago
- Scalable long-context LLM decoding that leverages sparsity by treating the KV cache as a vector storage system ☆41 · Updated this week
- nnScaler: Compiling DNN models for Parallel Training ☆113 · Updated last month
- 16-fold memory access reduction with nearly no loss ☆94 · Updated 2 months ago
- ☆76 · Updated last month
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models ☆63 · Updated 2 months ago
- ☆9 · Updated last year
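For the MFU calculator listed above: a common back-of-the-envelope convention approximates training cost for a decoder-only transformer as about 6 FLOPs per parameter per token (forward + backward, ignoring the attention term). The function below is a hypothetical sketch of that arithmetic, not the listed repository's code:

```python
def estimate_mfu(n_params: float, tokens_per_sec: float, peak_flops: float) -> float:
    """Model FLOPs Utilization: achieved training FLOP/s over hardware peak.

    Uses the common ~6 * N FLOPs-per-token approximation for a
    decoder-only transformer (forward + backward passes combined).
    """
    achieved_flops = 6.0 * n_params * tokens_per_sec
    return achieved_flops / peak_flops


# e.g. a 7B-parameter model at 4,000 tokens/s on a 312 TFLOP/s (A100 BF16) device:
print(f"MFU ≈ {estimate_mfu(7e9, 4_000, 312e12):.1%}")  # ≈ 53.8%
```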