UofT-EcoSystem / Tempo
Memory footprint reduction for transformer models
☆ 11 · Updated 2 years ago
Alternatives and similar repositories for Tempo:
Users interested in Tempo are comparing it to the libraries listed below.
- ☆ 72 · Updated 3 years ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆ 35 · Updated this week
- ☆ 42 · Updated 2 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters ☆ 38 · Updated 2 years ago
- Python package for rematerialization-aware gradient checkpointing ☆ 24 · Updated last year
- ☆ 29 · Updated last year
- ☆ 9 · Updated last year
- pytorch-profiler ☆ 51 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆ 90 · Updated 3 weeks ago
- Quantized Attention on GPU ☆ 45 · Updated 5 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆ 68 · Updated 10 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆ 81 · Updated 5 months ago
- ☆ 40 · Updated 9 months ago
- An Attention Superoptimizer ☆ 21 · Updated 3 months ago
- ☆ 48 · Updated 4 months ago
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking" ☆ 47 · Updated 9 months ago
- ☆ 59 · Updated 10 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆ 85 · Updated last year
- ☆ 38 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆ 46 · Updated 5 months ago
- ☆ 24 · Updated last year
- Sirius, an efficient correction mechanism that significantly boosts contextual-sparsity models on reasoning tasks while maintaining its… ☆ 21 · Updated 7 months ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (ICML 2021) ☆ 56 · Updated 3 years ago
- ☆ 43 · Updated last year
- ☆ 22 · Updated last year
- Supplemental materials for the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning ☆ 23 · Updated 4 months ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆ 46 · Updated 5 months ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS 2024) ☆ 19 · Updated last year
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models ☆ 62 · Updated last month
- ☆ 69 · Updated this week