hwang595 / DETOX
DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation
☆16 · Updated 4 years ago
Alternatives and similar repositories for DETOX:
Users interested in DETOX are also comparing it to the repositories listed below.
- An implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) ☆26 · Updated last year
- ☆15 · Updated 5 years ago
- ☆31 · Updated 5 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆73 · Updated 3 years ago
- A list of papers on Federated Learning, especially malicious clients and attacks ☆12 · Updated 4 years ago
- Official implementation of "Collaborative Fairness in Federated Learning" ☆51 · Updated 11 months ago
- Salvaging Federated Learning by Local Adaptation ☆56 · Updated 9 months ago
- Robust aggregation for federated learning with the RFA algorithm ☆48 · Updated 2 years ago
- Code for the TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models" ☆31 · Updated 2 years ago
- Attentive Federated Learning for Private NLM ☆61 · Updated 9 months ago
- Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent (ICLR 2021) ☆21 · Updated 4 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆56 · Updated 2 years ago
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" ☆33 · Updated 4 years ago
- Adaptive gradient sparsification for efficient federated learning: an online learning approach
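Several of the repositories above, DETOX included, address Byzantine-robust gradient aggregation: combining gradients from many workers so that a minority of malicious workers cannot corrupt the update. As a minimal sketch of the general idea (a common coordinate-wise-median baseline, not DETOX's actual redundancy-based algorithm), robust aggregation can look like this:

```python
from statistics import median

def coordinatewise_median(gradients):
    # Aggregate worker gradients by taking the median of each coordinate.
    # With fewer than half the workers Byzantine, each coordinate's median
    # stays within the range spanned by the honest workers' values.
    return [median(coord) for coord in zip(*gradients)]

# Four honest workers plus one adversarial worker sending a huge gradient.
honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1], [1.0, 2.0]]
byzantine = [[1000.0, -1000.0]]  # illustrative attack vector
print(coordinatewise_median(honest + byzantine))  # [1.0, 2.0]
```

A plain mean would be dragged far off by the adversarial gradient, while the per-coordinate median remains close to the honest average; schemes like DETOX and RFA build more efficient or more general defenses on top of this kind of robust-statistics intuition.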