nikitaivkin / csh
Simple Hierarchical Count Sketch in Python
☆20 · Updated 3 years ago
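A count sketch approximates item frequencies in a stream with a small `depth × width` table of counters, using an independent bucket hash and a random ±1 sign hash per row; the hierarchical variant additionally stacks sketches over dyadic levels to support heavy-hitter and range queries. As a rough illustration of the basic building block only, here is a minimal count sketch in Python; the class name `CountSketch` and the `depth`/`width` parameters are chosen for this example and are not this repository's API:

```python
import random

class CountSketch:
    """Minimal count sketch: `depth` rows of `width` signed counters.

    Each row has its own bucket hash and ±1 sign hash; an item's
    frequency estimate is the median of its signed row counters.
    (Illustrative sketch, not the API of nikitaivkin/csh.)
    """

    def __init__(self, depth=5, width=256, seed=0):
        rng = random.Random(seed)
        self.depth, self.width = depth, width
        # One (bucket-hash, sign-hash) seed pair per row.
        self._seeds = [(rng.randrange(2**31), rng.randrange(2**31))
                       for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _bucket_sign(self, row, item):
        # Built-in hash is stable within one process, which suffices here.
        h_seed, s_seed = self._seeds[row]
        bucket = hash((h_seed, item)) % self.width
        sign = 1 if hash((s_seed, item)) % 2 == 0 else -1
        return bucket, sign

    def update(self, item, count=1):
        for row in range(self.depth):
            bucket, sign = self._bucket_sign(row, item)
            self.table[row][bucket] += sign * count

    def query(self, item):
        estimates = []
        for row in range(self.depth):
            bucket, sign = self._bucket_sign(row, item)
            estimates.append(sign * self.table[row][bucket])
        estimates.sort()
        return estimates[self.depth // 2]  # median is robust to collisions

# Example: a heavy item's estimate stays close to its true count.
cs = CountSketch()
for token in ["a"] * 100 + ["b"] * 10:
    cs.update(token)
print(cs.query("a"))  # ~100
```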
Related projects
Alternatives and complementary repositories for csh
- Sketched SGD ☆28 · Updated 4 years ago
- A compressed adaptive optimizer for training large-scale deep learning models using PyTorch ☆27 · Updated 4 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆56 · Updated 6 years ago
- Code for the signSGD paper ☆81 · Updated 3 years ago
- SGD with compressed gradients and error-feedback: https://arxiv.org/abs/1901.09847 ☆29 · Updated 3 months ago (see the error-feedback sketch after this list)
- Federated posterior averaging implemented in JAX ☆49 · Updated last year
- PyTorch benchmarks of communication-efficient distributed SGD optimization algorithms ☆72 · Updated 3 years ago
- InstaHide: Instance-hiding Schemes for Private Distributed Learning ☆50 · Updated 4 years ago
- Decentralized SGD and Consensus with Communication Compression: https://arxiv.org/abs/1907.09356 ☆64 · Updated 4 years ago
- Implementation of (overlap) local SGD in PyTorch ☆32 · Updated 4 years ago
- Salvaging Federated Learning by Local Adaptation ☆56 · Updated 3 months ago
- FedDANE: A Federated Newton-Type Method (Asilomar Conference on Signals, Systems, and Computers '19) ☆24 · Updated last year
- Simplicial-FL to manage client device heterogeneity in Federated Learning ☆21 · Updated last year
- CoLa - Decentralized Linear Learning: https://arxiv.org/abs/1808.04883 ☆19 · Updated 2 years ago
- Vector quantization for stochastic gradient descent ☆33 · Updated 4 years ago
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 2 years ago
- Atomo: Communication-efficient Learning via Atomic Sparsification ☆25 · Updated 5 years ago
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆52 · Updated 3 years ago
- R-GAP: Recursive Gradient Attack on Privacy (accepted at ICLR 2021) ☆34 · Updated last year
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS '19) ☆34 · Updated 3 years ago
- Learning rate adaptation for differentially private stochastic gradient descent ☆16 · Updated 3 years ago
- DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation ☆16 · Updated 4 years ago
- Code for "Federated Accelerated Stochastic Gradient Descent" (NeurIPS 2020)☆14Updated 3 years ago