DS3Lab / AC-SGD
Code associated with the paper **Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees**.
☆27 · Updated last year
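The paper's key idea, as the title suggests, is to compress the activations exchanged between pipeline stages so that fine-tuning stays efficient over slow, bandwidth-limited links while retaining convergence guarantees. Below is a minimal, purely illustrative sketch of activation-delta quantization in that spirit; the class, the uniform 4-bit quantizer, and the round-trip harness are assumptions for exposition, not this repository's actual API.

```python
# Illustrative sketch only: low-bit compression of activation *changes*
# between pipeline stages. Names and bit-width are assumptions.
import torch


def quantize(x: torch.Tensor, bits: int = 4):
    """Uniformly quantize a tensor to `bits`; returns uint8 codes plus (scale, offset)."""
    qmax = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp(min=1e-8) / qmax
    codes = ((x - lo) / scale).round().clamp(0, qmax).to(torch.uint8)
    return codes, scale, lo


def dequantize(codes: torch.Tensor, scale: torch.Tensor, lo: torch.Tensor):
    """Invert `quantize` up to rounding error."""
    return codes.to(torch.float32) * scale + lo


class ActivationDeltaCompressor:
    """Each side keeps the last reconstructed activation; only a quantized
    delta crosses the (slow) network, shrinking 32-bit floats to a few bits."""

    def __init__(self, bits: int = 4):
        self.bits = bits
        self.state = None  # shared reconstruction, identical on both ends

    def encode(self, activation: torch.Tensor):
        if self.state is None:
            self.state = torch.zeros_like(activation)
        payload = quantize(activation - self.state, self.bits)
        # Mirror the receiver's update so both sides stay in sync.
        self.state = self.state + dequantize(*payload)
        return payload

    def decode(self, payload, shape):
        if self.state is None:
            self.state = torch.zeros(shape)
        self.state = self.state + dequantize(*payload)
        return self.state


# Round trip: sender and receiver stay drift-free because both update
# `state` from the same quantized delta.
sender, receiver = ActivationDeltaCompressor(), ActivationDeltaCompressor()
for _ in range(3):
    act = torch.randn(2, 8)
    recon = receiver.decode(sender.encode(act), act.shape)
    print((recon - act).abs().max())
```

Because both ends reconstruct from the same quantized delta, quantization error is fed back into the next delta rather than accumulating, which is the kind of property convergence analyses of such schemes typically rely on.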
Alternatives and similar repositories for AC-SGD:
Users interested in AC-SGD are comparing it to the libraries listed below.
- PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models. ICML 2021 ☆55 · Updated 3 years ago
- ☆49 · Updated last year
- PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLM Inference ☆32 · Updated 7 months ago
- The implementation for the MLSys 2023 paper "Cuttlefish: Low-rank Model Training without All The Tuning" ☆43 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated last year
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆81 · Updated last year
- ☆19 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆37 · Updated 3 months ago
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression (see the sketch after this list) ☆13 · Updated last year
- ☆23 · Updated 3 months ago
- ☆92 · Updated 2 years ago
- ☆41 · Updated 2 months ago
- ☆27 · Updated 10 months ago
- ☆41 · Updated 2 years ago
- ☆49 · Updated 9 months ago
- [IJCAI 2023] An automated parallel training system that combines the advantages of both data and model parallelism. If you have any inte… ☆51 · Updated last year
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆27 · Updated last year
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters. ☆37 · Updated 2 years ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆40 · Updated 2 months ago
- This repo contains the source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆32 · Updated 6 months ago
- ☆36 · Updated 5 months ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS 2024) ☆19 · Updated 11 months ago
- Official repo for "LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization" ☆28 · Updated 11 months ago
- A minimal implementation of vllm. ☆33 · Updated 6 months ago
- ☆20 · Updated last year
- ☆31 · Updated 3 months ago
- An external memory allocator example for PyTorch. ☆14 · Updated 3 years ago
- Official repo for "SparseLLM: Global Pruning of LLMs" (NeurIPS 2024) ☆51 · Updated last week
- ☆22 · Updated 4 years ago
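Several of the entries above (the ICDCS 2023 gradient-compression study and the layerwise-adaptive framework) center on shrinking gradients before they are synchronized. A minimal top-k sparsification sketch, again purely illustrative and not drawn from any of those repositories:

```python
# Hypothetical illustration of top-k gradient sparsification; the function
# names and the 5% keep-ratio are assumptions made for this demo.
import torch


def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries."""
    flat = grad.reshape(-1)
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, grad.shape  # payload to transmit


def topk_decompress(values, indices, shape):
    """Scatter the surviving entries back into a dense zero tensor."""
    flat = torch.zeros(shape.numel(), dtype=values.dtype)
    flat[indices] = values
    return flat.reshape(shape)


grad = torch.randn(4, 256)
payload = topk_compress(grad, ratio=0.05)
recovered = topk_decompress(*payload)
print(recovered.count_nonzero().item())  # 51 of 1024 entries survive
```

A layerwise-adaptive scheme would choose `ratio` per layer rather than fixing it globally; the flat 5% here is only for the demonstration.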