RulinShao / FastCkpt
Python package for rematerialization-aware gradient checkpointing
☆24 · Updated last year
Alternatives and similar repositories for FastCkpt:
Users interested in FastCkpt are comparing it to the libraries listed below.
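FastCkpt's core idea, rematerialization-aware gradient checkpointing (recomputing activations during the backward pass instead of storing them all), can be illustrated with a minimal sketch. The scalar layer chain, the `forward`/`backward` helpers, and the `segment` parameter below are hypothetical illustrations, not FastCkpt's actual API:

```python
# Minimal sketch of gradient checkpointing with rematerialization,
# using a chain of scalar "layers" y = w * x. NOTE: illustration only;
# this is not FastCkpt's actual API or algorithm.

def forward(ws, x, segment=2):
    """Run the layer chain, storing activations only at segment
    boundaries instead of at every layer (the memory saving)."""
    saved = [(0, x)]  # (layer index, checkpointed activation)
    for i, w in enumerate(ws):
        x = w * x
        if (i + 1) % segment == 0 and i + 1 < len(ws):
            saved.append((i + 1, x))
    return x, saved

def backward(ws, saved, grad_out):
    """Backprop by rematerializing each segment's activations from
    its checkpoint (the extra compute that buys the memory saving)."""
    grads = [0.0] * len(ws)
    grad = grad_out       # gradient w.r.t. the current segment's output
    end = len(ws)
    for start, x0 in reversed(saved):
        # Recompute the activations inside this segment on demand.
        acts = [x0]
        for w in ws[start:end]:
            acts.append(w * acts[-1])
        # Standard reverse pass over the recomputed segment.
        for i in range(end - 1, start - 1, -1):
            grads[i] = grad * acts[i - start]  # dL/dw_i = grad * x_i
            grad = grad * ws[i]                # propagate grad to x_i
        end = start
    return grads
```

With `ws = [2.0, 3.0, 4.0]` and `x = 1.0`, only the activations at layers 0 and 2 are kept; the backward pass recomputes the rest and produces the same gradients as plain backpropagation, at the cost of one extra forward pass per segment.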
- ☆72 · Updated 3 years ago
- ☆38 · Updated last year
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆209 · Updated 8 months ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆38 · Updated 2 years ago
- ☆82 · Updated 3 years ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆85 · Updated last year
- ☆48 · Updated 4 months ago
- Sequence-level 1F1B schedule for LLMs. ☆17 · Updated 10 months ago
- ☆103 · Updated 7 months ago
- Sirius, an efficient correction mechanism, which significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… ☆21 · Updated 7 months ago
- A resilient distributed training framework ☆94 · Updated last year
- Stateful LLM Serving ☆63 · Updated last month
- Odysseus: Playground of LLM Sequence Parallelism ☆68 · Updated 10 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆81 · Updated 5 months ago
- ☆95 · Updated 5 months ago
- ☆59 · Updated 10 months ago
- A simple calculation for LLM MFU. ☆34 · Updated last month
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆23 · Updated 4 months ago
- ☆93 · Updated 2 years ago
- Memory footprint reduction for transformer models ☆11 · Updated 2 years ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆46 · Updated 5 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆106 · Updated 2 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆36 · Updated 4 months ago
- [ICLR 2025] TidalDecode: A Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆33 · Updated 2 weeks ago
- ☆42 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆17 · Updated last year
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ☆70 · Updated last month
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆153 · Updated 7 months ago