epfml / powergossip
Code for "Practical Low-Rank Communication Compression in Decentralized Deep Learning"
☆17 · Updated 5 years ago
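PowerGossip compresses the differences between neighboring workers' parameters with low-rank compressors computed by power iteration, reusing the previous round's factors as a warm start. Below is a minimal NumPy sketch of one rank-1 power-iteration compression step in that spirit; the function name and shapes are illustrative assumptions, not code from this repository.

```python
import numpy as np

def rank1_compress(delta, p_prev):
    """One power-iteration step: approximate `delta` (e.g. the difference
    between two neighbors' weight matrices) by an outer product p q^T, so
    only the vectors p and q need to be communicated (m + n floats instead
    of m * n). Warm-starting from the previous round's left factor `p_prev`
    mirrors PowerGossip's reuse of earlier iterations."""
    q = delta.T @ p_prev              # right factor, shape (n,)
    q /= np.linalg.norm(q) + 1e-12    # normalize for numerical stability
    p = delta @ q                     # left factor, shape (m,)
    return p, q

# Toy usage: compress a 256x128 "model difference" and check the error.
rng = np.random.default_rng(0)
delta = rng.standard_normal((256, 128))
p, q = rank1_compress(delta, rng.standard_normal(256))
print(np.linalg.norm(delta - np.outer(p, q)) / np.linalg.norm(delta))
```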
Alternatives and similar repositories for powergossip
Users interested in powergossip are comparing it to the repositories listed below.
- Sketched SGD ☆28 · Updated 5 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆74 · Updated 5 years ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆149 · Updated last year
- Code for the signSGD paper ☆92 · Updated 4 years ago
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression ☆14 · Updated 2 years ago
- ☆33 · Updated 6 years ago
- ☆37 · Updated 3 years ago
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 3 years ago
- This repository is the official implementation of "EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning" ☆14 · Updated 3 years ago
- Code related to "Beyond spectral gap: The role of the topology in decentralized learning" ☆13 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- ☆46 · Updated 5 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated 2 years ago
- Efficient LLM Inference Acceleration using Prompting ☆51 · Updated last year
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated 2 years ago
- Model Fusion via Optimal Transport, NeurIPS 2020 ☆151 · Updated 3 years ago
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 (a sketch combining this error-feedback idea with sign compression follows the list) ☆32 · Updated last year
- Code for Double Blind Collaborative Learning (DBCL) ☆14 · Updated 4 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆53 · Updated 4 years ago
- Communication-efficient decentralized SGD (PyTorch) ☆25 · Updated 5 years ago
- Code for "Adaptive Gradient Quantization for Data-Parallel SGD", published at NeurIPS 2020 ☆30 · Updated 4 years ago
- ☆77 · Updated 6 years ago
- ☆27 · Updated 3 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 5 years ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Updated 2 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets Can Win the Jackpot" ☆42 · Updated 5 years ago
- The implementation of the MLSys 2023 paper "Cuttlefish: Low-Rank Model Training without All the Tuning" ☆45 · Updated 2 years ago
- Vector quantization for stochastic gradient descent ☆35 · Updated 5 years ago
- Code for the paper "Secure Distributed Training at Scale" (ICML 2022) ☆16 · Updated 11 months ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge" by Geng Yuan, Xiaolong Ma, Yanzhi Wang, et al. ☆18 · Updated 3 years ago
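Two entries above, the signSGD repository and the error-feedback repository (arXiv:1901.09847), share one mechanism: compress the update aggressively and carry the compression error into the next step so nothing is silently dropped. The sketch below combines the two (EF-signSGD style) under assumed names and a plain scaled-sign compressor; it is an illustration of the general idea, not code from either repository.

```python
import numpy as np

def ef_sign_step(w, grad, memory, lr=0.1):
    """One error-feedback step with scaled-sign compression:
    add back the previously dropped residual, transmit only the
    cheap sign vector plus one scale, and remember what was lost."""
    corrected = lr * grad + memory            # error-corrected update
    scale = np.abs(corrected).mean()          # a single float per tensor
    compressed = scale * np.sign(corrected)   # what a worker would send
    memory = corrected - compressed           # residual for the next step
    return w - compressed, memory

# Toy usage on a quadratic: minimize 0.5 * ||w||^2, whose gradient is w.
w = np.ones(4)
memory = np.zeros_like(w)
for _ in range(100):
    w, memory = ef_sign_step(w, grad=w, memory=memory)
print(w)  # close to zero
```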