DS3Lab / AC-SGD
Code associated with the paper **Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees**.
☆28 · Updated 2 years ago
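Since AC-SGD's headline idea is compressing the activations that cross a slow network link between pipeline stages, here is a minimal, illustrative PyTorch sketch of that idea: uniform int8 quantization of an activation tensor before transmission, and dequantization on the receiving stage. The function names and the simple min-max scheme are assumptions for illustration only, not the repo's actual API (which additionally provides convergence guarantees via error compensation).

```python
import torch

def quantize_activation(x: torch.Tensor, bits: int = 8):
    """Uniformly quantize a float activation tensor to `bits`-bit codes.

    Returns the integer codes plus the (scale, min) needed to
    dequantize on the receiving pipeline stage.
    """
    qmax = 2 ** bits - 1
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min).clamp(min=1e-8) / qmax
    codes = ((x - x_min) / scale).round().clamp(0, qmax).to(torch.uint8)
    return codes, scale, x_min

def dequantize_activation(codes, scale, x_min):
    """Reconstruct an approximate float activation from its integer codes."""
    return codes.to(torch.float32) * scale + x_min

# Round-trip check: the transmitted tensor is 4x smaller (uint8 vs float32),
# at the cost of a small quantization error.
x = torch.randn(2, 128)
codes, scale, x_min = quantize_activation(x)
x_hat = dequantize_activation(codes, scale, x_min)
print((x - x_hat).abs().max())
```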
Alternatives and similar repositories for AC-SGD
Users interested in AC-SGD are comparing it to the repositories listed below:
- PyTorch implementation of the ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference" ☆44 · Updated last year
- Implementation of the MLSys 2023 paper "Cuttlefish: Low-rank Model Training without All The Tuning" ☆45 · Updated 2 years ago
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-Scale Models (ICML 2021) ☆56 · Updated 4 years ago
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated 2 years ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆90 · Updated 2 years ago
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆39 · Updated last year
- Official repository for the IPDPS 2024 paper "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" ☆20 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆78 · Updated 10 months ago
- 16-fold memory access reduction with nearly no loss ☆105 · Updated 5 months ago
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression ☆14 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆77 · Updated last year
- Repository for the COLM 2025 paper "SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths" ☆12 · Updated 2 months ago
- Official implementation of the ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking" ☆48 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆18 · Updated 9 months ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆52 · Updated 9 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆120 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆80 · Updated 2 months ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆68 · Updated 9 months ago