epfml / powergossip
Code for "Practical Low-Rank Communication Compression in Decentralized Deep Learning"
☆16 · Updated 4 years ago
Alternatives and similar repositories for powergossip
Users interested in powergossip are comparing it to the repositories listed below.
- Practical low-rank gradient compression for distributed optimization (PowerSGD): https://arxiv.org/abs/1905.13727 ☆147 · Updated 8 months ago (a sketch of the compressor follows the list)
- Sketched SGD ☆28 · Updated 5 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆71 · Updated 4 years ago
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 3 years ago
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression ☆15 · Updated last year
- Federated Learning Framework Benchmark (UniFed) ☆49 · Updated 2 years ago
- Federated posterior averaging implemented in JAX ☆51 · Updated 2 years ago
- ☆46 · Updated 5 years ago
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated 2 years ago
- Model Fusion via Optimal Transport, NeurIPS 2020 ☆148 · Updated 2 years ago
- This repository is the official implementation of "EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning" ☆14 · Updated 2 years ago
- Communication-efficient decentralized SGD (PyTorch) ☆25 · Updated 5 years ago
- Efficient LLM Inference Acceleration using Prompting ☆48 · Updated 8 months ago
- ☆22 · Updated 2 years ago
- DP-FTRL from "Practical and Private (Deep) Learning without Sampling or Shuffling" for centralized training ☆29 · Updated last month
- Simplicial-FL to manage client device heterogeneity in Federated Learning ☆22 · Updated last year
- ☆27 · Updated last year
- Vector quantization for stochastic gradient descent ☆35 · Updated 5 years ago (a stochastic quantization sketch follows the list)
- ☆27 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- ☆20 · Updated 2 years ago
- Code for the signSGD paper ☆87 · Updated 4 years ago (a sign-compression sketch follows the list)
- The implementation of the MLSys 2023 paper "Cuttlefish: Low-rank Model Training without All The Tuning" ☆45 · Updated 2 years ago
- Simple Hierarchical Count Sketch in Python ☆21 · Updated 4 years ago (a count-sketch sketch follows the list)
- Code for "Federated Accelerated Stochastic Gradient Descent" (NeurIPS 2020)☆15Updated 4 years ago
- ☆33Updated 5 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al. ☆18 · Updated 3 years ago
- Libraries for efficient and scalable group-structured dataset pipelines ☆26 · Updated 3 weeks ago
- Code for testing DCT plus Sparse (DCTpS) networks ☆14 · Updated 4 years ago
- Implementation of (overlap) local SGD in PyTorch ☆33 · Updated 5 years ago (a local SGD sketch follows the list)
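The sketches below illustrate, in plain NumPy, the core techniques behind several of the repositories above. They are minimal single-process approximations under stated assumptions, not the repositories' actual code.

First, the rank-r compressor from PowerSGD (arXiv:1905.13727, the first item): one warm-started power iteration per step plus error feedback. The matrix shape, rank 2, and the omission of the cross-worker all-reduce of the factors are simplifications for the demo.

```python
import numpy as np

def orthonormalize(p):
    # Orthonormalize columns via QR (the paper uses Gram-Schmidt).
    q, _ = np.linalg.qr(p)
    return q

def powersgd_step(grad, q_prev):
    # One power iteration, warm-started with q_prev from the last round.
    p = orthonormalize(grad @ q_prev)   # (n, r) left factor
    q = grad.T @ p                      # (m, r) right factor
    return p, q, p @ q.T                # factors to transmit + reconstruction

rng = np.random.default_rng(0)
grad = rng.standard_normal((64, 32))    # stand-in gradient matrix
q = rng.standard_normal((32, 2))        # rank 2, random first init
error = np.zeros_like(grad)             # error-feedback buffer
for _ in range(3):
    p, q, approx = powersgd_step(grad + error, q)
    error = (grad + error) - approx     # residual carried to the next round
```

Communicating P (64x2) and Q (32x2) costs 192 floats per step versus 2048 for the full matrix, which is where the compression comes from.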
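For the signSGD repository, a sketch of sign compression with majority-vote aggregation, the scheme of the signSGD papers; the worker count, noise model, and learning rate here are illustrative only.

```python
import numpy as np

def compress(grad):
    # Each worker communicates one bit per coordinate: the sign.
    return np.sign(grad)

def majority_vote(signs):
    # The server takes a coordinate-wise majority vote and broadcasts
    # the resulting sign vector back to the workers.
    return np.sign(np.sum(signs, axis=0))

rng = np.random.default_rng(1)
true_grad = rng.standard_normal(8)
worker_signs = [compress(true_grad + 0.5 * rng.standard_normal(8))
                for _ in range(5)]          # five workers, noisy gradients
w = np.zeros(8)
w -= 0.01 * majority_vote(worker_signs)     # signSGD parameter update
```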
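For the vector-quantization item, a generic QSGD-style unbiased stochastic quantizer; the linked repository's exact scheme is not specified above, so treat this as one standard instance of gradient quantization rather than that repo's method.

```python
import numpy as np

def stochastic_quantize(vec, levels=4, rng=None):
    # Round each magnitude up or down to a uniform grid of `levels`
    # points on [0, ||vec||], with probabilities making E[q] == vec.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(vec)
    if norm == 0.0:
        return vec
    scaled = np.abs(vec) / norm * levels
    lower = np.floor(scaled)
    q = lower + (rng.random(vec.shape) < scaled - lower)
    return np.sign(vec) * q * (norm / levels)

rng = np.random.default_rng(2)
g = rng.standard_normal(10)
mean_est = np.mean([stochastic_quantize(g, rng=rng) for _ in range(10000)],
                   axis=0)
# mean_est ≈ g: the quantizer is unbiased, so averaging recovers the gradient
```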
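For the hierarchical count sketch repository, a sketch of the flat count sketch it builds on; the hierarchy, which stacks sketches over dyadic ranges to locate heavy hitters, is omitted, and the parameter names are mine.

```python
import numpy as np

class CountSketch:
    # `depth` hash rows x `width` signed buckets; a point query takes
    # the median estimate across the rows.
    def __init__(self, depth, width, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((depth, width))
        self.bucket = rng.integers(0, width, size=(depth, dim))
        self.sign = rng.choice([-1.0, 1.0], size=(depth, dim))

    def accumulate(self, vec):
        for r in range(len(self.table)):
            # np.add.at sums correctly when several coords share a bucket.
            np.add.at(self.table[r], self.bucket[r], self.sign[r] * vec)

    def query(self, i):
        return np.median([self.sign[r, i] * self.table[r, self.bucket[r, i]]
                          for r in range(len(self.table))])

cs = CountSketch(depth=5, width=64, dim=1000)
v = np.zeros(1000)
v[7] = 10.0
cs.accumulate(v)
print(cs.query(7))   # ≈ 10.0; queries for other coordinates stay near 0
```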
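Finally, for the local SGD repository, plain (non-overlapped) local SGD: workers take several SGD steps independently, then average their model copies once per round. The overlap variant additionally hides the averaging behind computation; the toy quadratic objective here is an assumption for the demo.

```python
import numpy as np

def local_sgd(w_init, grad_fn, n_workers=4, lr=0.1, local_steps=8, rounds=5):
    # Each worker steps on its own copy; one averaging (all-reduce) per round.
    workers = [w_init.copy() for _ in range(n_workers)]
    for _ in range(rounds):
        for w in workers:                   # local phase: no communication
            for _ in range(local_steps):
                w -= lr * grad_fn(w)
        avg = np.mean(workers, axis=0)      # the only communication step
        for w in workers:
            w[:] = avg
    return workers[0]

rng = np.random.default_rng(3)
noisy_grad = lambda w: w + 0.1 * rng.standard_normal(w.shape)  # f(w) = ||w||²/2
print(local_sgd(np.ones(4), noisy_grad))   # drifts toward the optimum at 0
```

Communication drops by a factor of `local_steps` relative to synchronous SGD, at the cost of some drift between the worker copies during the local phase.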