AntonFriberg / pytorch-cinic-10
PyTorch deep learning image classification using the CINIC-10 dataset.
☆22 · Updated 5 years ago
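CINIC-10 ships in an ImageNet-style directory layout (train/valid/test folders with one subfolder per class), so it can typically be loaded with torchvision's generic ImageFolder. The sketch below illustrates that approach; it is an assumption-based example rather than code from this repository, the ./data/cinic-10 path is hypothetical, and the normalization statistics are the ones published alongside the dataset.

```python
# Minimal sketch (not taken from pytorch-cinic-10): loading CINIC-10 with torchvision.
# Assumes the dataset has already been downloaded and extracted to ./data/cinic-10/,
# which contains train/, valid/ and test/ directories with one folder per class.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Channel statistics published with the CINIC-10 dataset.
CINIC_MEAN = (0.47889522, 0.47227842, 0.43047404)
CINIC_STD = (0.24205776, 0.23828046, 0.25874835)

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(CINIC_MEAN, CINIC_STD),
])

# ImageFolder infers class labels from the subdirectory names.
train_set = datasets.ImageFolder("./data/cinic-10/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=2)

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([128, 3, 32, 32]); CINIC-10 images are 32x32 RGB
```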
Alternatives and similar repositories for pytorch-cinic-10
Users interested in pytorch-cinic-10 are comparing it to the repositories listed below.
- FedNAS: Federated Deep Learning via Neural Architecture Search ☆54 · Updated 3 years ago
- Partial implementation of the paper "Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training" ☆31 · Updated 4 years ago
- Vector quantization for stochastic gradient descent. ☆35 · Updated 5 years ago
- ☆46 · Updated 5 years ago
- Sparsified SGD with Memory: https://arxiv.org/abs/1809.07599 ☆58 · Updated 6 years ago
- Code for the signSGD paper ☆86 · Updated 4 years ago
- Related material on Federated Learning ☆26 · Updated 5 years ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆36 · Updated 2 years ago
- Attentive Federated Learning for Private NLM ☆61 · Updated 10 months ago
- Federated Dynamic Sparse Training ☆30 · Updated 3 years ago
- Code for the NeurIPS 2020 paper "Byzantine Resilient Distributed Multi-Task Learning" ☆9 · Updated 4 years ago
- MiLeNAS: Efficient Neural Architecture Search via Mixed-Level Reformulation. Published in CVPR 2020 ☆37 · Updated 4 years ago
- Salvaging Federated Learning by Local Adaptation ☆56 · Updated 10 months ago
- Source code of the ICLR 2020 submission "Zeno++: Robust Fully Asynchronous SGD" ☆13 · Updated 5 years ago
- ☆22 · Updated 4 years ago
- Implementation of (overlap) local SGD in PyTorch ☆33 · Updated 4 years ago
- SGD with compressed gradients and error feedback: https://arxiv.org/abs/1901.09847 ☆30 · Updated 10 months ago
- ☆26 · Updated 6 years ago
- Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? ☆14 · Updated 3 years ago
- InstaHide: Instance-hiding Schemes for Private Distributed Learning ☆50 · Updated 4 years ago
- Data-Free Network Quantization with Adversarial Knowledge Distillation (PyTorch) ☆29 · Updated 3 years ago
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) ☆53 · Updated 6 years ago
- Code for the paper "Variance Reduced Local SGD with Lower Communication Complexity" ☆12 · Updated 5 years ago
- Bayesian Nonparametric Federated Learning of Neural Networks ☆143 · Updated 6 years ago
- Code for "Federated Accelerated Stochastic Gradient Descent" (NeurIPS 2020) ☆15 · Updated 3 years ago
- [ICLR 2018] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training ☆222 · Updated 10 months ago
- Understanding Top-k Sparsification in Distributed Deep Learning (see the sketch after this list) ☆24 · Updated 5 years ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Updated 5 years ago
- ☆74 · Updated 5 years ago
- [ICLR 2022] Efficient Split-Mix federated learning for in-situ model customization during both training and testing time ☆43 · Updated 2 years ago
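Several of the repositories above (the Deep Gradient Compression implementations, top-k sparsification, and error-feedback SGD entries) revolve around the same communication-reduction idea: transmit only the largest-magnitude gradient entries and carry the dropped remainder into the next round. The sketch below illustrates that generic top-k-with-error-feedback pattern; it is not code from any listed project, and the k_ratio parameter and residual buffer are assumptions for illustration.

```python
# Rough sketch of top-k gradient sparsification with error feedback, the common
# technique behind several repositories in the list above. Illustrative only.
import torch

def topk_sparsify(grad: torch.Tensor, k_ratio: float = 0.01) -> torch.Tensor:
    """Keep only the k largest-magnitude entries of a gradient tensor."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * k_ratio))
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.view_as(grad)

def compress_with_error_feedback(grad: torch.Tensor, residual: torch.Tensor, k_ratio: float = 0.01):
    """Add the residual from the previous round, sparsify, and keep what was dropped."""
    corrected = grad + residual
    sparse = topk_sparsify(corrected, k_ratio)
    new_residual = corrected - sparse  # error feedback: dropped mass is carried forward
    return sparse, new_residual

# Example: compress a synthetic gradient and check how many entries survive.
g = torch.randn(10_000)
r = torch.zeros_like(g)
sparse_g, r = compress_with_error_feedback(g, r, k_ratio=0.01)
print((sparse_g != 0).sum().item())  # 100 nonzero entries out of 10000
```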