vineeths96 / Gradient-Compression
We present a set of all-reduce compatible gradient compression algorithms that significantly reduce communication overhead while matching the performance of vanilla SGD. We empirically evaluate the compression methods by training deep neural networks on the CIFAR-10 dataset.
☆10 · Updated 3 years ago
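The description above summarizes data-parallel training: every worker computes gradients, the gradients are summed with an all-reduce collective, and compression shrinks what goes over the wire. As a minimal, hypothetical sketch of what "all-reduce compatible" can mean in practice (illustrative only, not the repository's actual code; the class `FP16CompressedAllReduce` is an assumed name), one can cast gradients to fp16 before `torch.distributed.all_reduce` and carry the rounding error forward via error feedback:

```python
# Hypothetical sketch of all-reduce compatible gradient compression
# (fp16 cast + error feedback); NOT the vineeths96/Gradient-Compression code.
# Assumes torch.distributed has already been initialized
# (dist.init_process_group) as in any data-parallel training script.
import torch
import torch.distributed as dist

class FP16CompressedAllReduce:
    def __init__(self, params):
        # Per-parameter residuals hold the error left behind by compression.
        self.params = list(params)
        self.residuals = [torch.zeros_like(p) for p in self.params]

    def reduce_gradients(self):
        world_size = dist.get_world_size()
        for p, r in zip(self.params, self.residuals):
            if p.grad is None:
                continue
            corrected = p.grad + r                   # error feedback
            compressed = corrected.half()            # lossy fp32 -> fp16 cast
            r.copy_(corrected - compressed.float())  # remember the new error
            dist.all_reduce(compressed)              # same collective as plain SGD
            p.grad.copy_(compressed.float() / world_size)
```

Because every worker contributes a tensor of identical shape and dtype, the ordinary sum all-reduce applies unchanged; schemes whose payloads differ per worker (e.g. top-k sparsification) need custom collectives instead.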
Alternatives and similar repositories for Gradient-Compression
Users interested in Gradient-Compression are comparing it to the repositories listed below.
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… · ☆26 · Updated 2 years ago
- ☆10 · Updated 4 years ago
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup · ☆34 · Updated 2 years ago
- Understanding Top-k Sparsification in Distributed Deep Learning (see the top-k sketch after this list) · ☆24 · Updated 5 years ago
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning · ☆10 · Updated 2 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" · ☆31 · Updated 4 years ago
- This repository is the official implementation of 'EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Lea… · ☆14 · Updated 2 years ago
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… · ☆15 · Updated last year
- [ACM SoCC'22] Pisces: Efficient Federated Learning via Guided Asynchronous Training · ☆12 · Updated last month
- A Sparse-tensor Communication Framework for Distributed Deep Learning · ☆13 · Updated 3 years ago
- ☆33 · Updated 5 years ago
- gTop-k S-SGD: A Communication-Efficient Distributed Synchronous SGD Algorithm for Deep Learning · ☆36 · Updated 5 years ago
- ☆14 · Updated 3 years ago
- Federated Dynamic Sparse Training · ☆30 · Updated 3 years ago
- Federated Learning Framework Benchmark (UniFed) · ☆49 · Updated last year
- [ICDCS 2023] DeAR: Accelerating Distributed Deep Learning with Fine-Grained All-Reduce Pipelining · ☆11 · Updated last year
- A computation-parallel deep learning architecture. · ☆13 · Updated 5 years ago
- Create tiny ML systems for on-device learning. · ☆20 · Updated 3 years ago
- LotteryFL: Empower Edge Intelligence with Personalized and Communication-Efficient Federated Learning (2021 IEEE/ACM Symposium on Edge Co… · ☆40 · Updated 2 years ago
- ☆11 · Updated 5 months ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published in NeurIPS 2020. · ☆30 · Updated 4 years ago
- Federated Learning Systems Paper List · ☆73 · Updated last year
- Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? · ☆14 · Updated 3 years ago
- Dual-way gradient sparsification approach for async DNN training, based on PyTorch. · ☆11 · Updated 2 years ago
- Vector quantization for stochastic gradient descent. · ☆35 · Updated 5 years ago
- ☆15 · Updated 3 years ago
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression · ☆13 · Updated last year
- Layer-wise Sparsification of Distributed Deep Learning · ☆10 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- Cupcake: A Compression Scheduler for Scalable Communication-Efficient Distributed Training (MLSys '23) · ☆9 · Updated last year
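Several entries above (Ok-Topk, gTop-k S-SGD, the Top-k sparsification study) revolve around top-k gradient sparsification. Here is a minimal sketch of the core primitive, assuming a PyTorch setting (the helper `topk_sparsify` is hypothetical, not code from any listed repository): each worker transmits only the k largest-magnitude gradient entries and keeps the remainder as a residual for error feedback on the next step.

```python
# Minimal top-k gradient sparsification sketch (illustrative only).
import torch

def topk_sparsify(grad: torch.Tensor, k: int):
    """Return the k largest-magnitude entries (the sparse payload a worker
    would communicate) and the dense residual kept locally for error feedback.
    Requires 1 <= k <= grad.numel()."""
    flat = grad.flatten()
    _, idx = torch.topk(flat.abs(), k)
    values = flat[idx]
    residual = flat.clone()
    residual[idx] = 0.0  # transmitted entries carry no residual
    return values, idx, residual.view_as(grad)
```

For example, `values, idx, residual = topk_sparsify(p.grad, k=max(1, p.grad.numel() // 100))` keeps roughly the top 1% of entries. Since each worker selects different indices, the result is no longer directly all-reduce compatible, which is exactly the gap that sparse-allreduce designs such as Ok-Topk target.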