inspire-group / ModelPoisoning
Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470
☆147 · Updated 2 years ago
Alternatives and similar repositories for ModelPoisoning:
Users interested in ModelPoisoning are comparing it to the repositories listed below.
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆140 · Updated 2 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆71 · Updated 3 years ago
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆183 · Updated 3 years ago
- Robust aggregation for federated learning with the RFA algorithm (an RFA-style sketch appears after this list). ☆47 · Updated 2 years ago
- Code for "Data Poisoning Attacks Against Federated Learning Systems" ☆180 · Updated 3 years ago
- A sybil-resilient distributed learning protocol. ☆100 · Updated last year
- The code for "Improved Deep Leakage from Gradients" (iDLG; see the label-inference sketch after this list). ☆147 · Updated 3 years ago
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459); a model-replacement sketch appears after this list. ☆287 · Updated 6 months ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective" ☆83 · Updated 4 years ago
- Concentrated Differentially Private Gradient Descent with Adaptive per-Iteration Privacy Budget ☆49 · Updated 6 years ago
- ☆54 · Updated 3 years ago
- Ditto: Fair and Robust Federated Learning Through Personalization (ICML '21) ☆139 · Updated 2 years ago
- This project evaluates the privacy leakage of differentially private machine learning models. ☆130 · Updated 2 years ago
- Curated notebooks on training neural networks with differential privacy and federated learning. ☆66 · Updated 4 years ago
- Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent (ICLR 2021) ☆20 · Updated 3 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆55 · Updated last year
- Simple differential privacy in PyTorch ☆48 · Updated 4 years ago
- [NeurIPS 2019 FL Workshop] Federated Learning with Local and Global Representations ☆230 · Updated 6 months ago
- ☆15 · Updated 5 years ago
- An implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019); an ALIE-style sketch appears after this list. ☆26 · Updated last year
- Implementation of calibration bounds for differential privacy in the shuffle model ☆23 · Updated 4 years ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models" by Shokri et al. ☆58 · Updated 5 years ago
- A Simulator for Privacy-Preserving Federated Learning ☆93 · Updated 4 years ago
- Implementation of a DP-based federated learning framework in PyTorch ☆291 · Updated last year
- Official implementation of "Collaborative Fairness in Federated Learning" ☆50 · Updated 8 months ago
- Code for the TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models" ☆31 · Updated 2 years ago
- An implementation of "Deep Learning with Differential Privacy" ☆24 · Updated last year
- A list of papers on federated learning, with a focus on malicious clients and attacks ☆12 · Updated 4 years ago
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" (a DP-SGD-style sketch appears after this list) ☆49 · Updated 3 years ago
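
A few of the entries above name techniques compact enough to sketch. These sketches are minimal illustrations under stated assumptions, not the repositories' actual code; all function and parameter names are illustrative.

First, the RFA entry: RFA replaces the server's weighted mean with an approximate geometric median of the client updates, typically computed by a smoothed Weiszfeld iteration.

```python
import torch

def rfa_aggregate(updates, weights=None, iters=10, eps=1e-8):
    """Approximate geometric median of flat client updates via smoothed Weiszfeld."""
    n = len(updates)
    w = torch.ones(n) if weights is None else torch.as_tensor(weights, dtype=torch.float)
    v = sum(wi * u for wi, u in zip(w, updates)) / w.sum()  # start at the weighted mean
    for _ in range(iters):
        # distance of each update to the current estimate, smoothed away from zero
        dists = torch.stack([torch.norm(u - v).clamp(min=eps) for u in updates])
        beta = w / dists  # inverse-distance reweighting: outliers count less
        v = sum(bi * u for bi, u in zip(beta, updates)) / beta.sum()
    return v
```

Unlike the mean, the geometric median cannot be dragged arbitrarily far by a minority of poisoned updates, which is the source of RFA's robustness guarantee.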
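The iDLG entry builds on a simple observation: with cross-entropy loss and a single training example, the gradient of the final layer's bias equals softmax(logits) − onehot(label), so exactly one entry is negative, and its index is the ground-truth label. A minimal sketch assuming that setup:

```python
import torch

def infer_label_from_bias_grad(bias_grad: torch.Tensor) -> int:
    """bias_grad: gradient of the final layer's bias for a single example."""
    # With cross-entropy, bias_grad = softmax(logits) - onehot(label),
    # so the only negative entry sits at the true label's index.
    return int(torch.argmin(bias_grad).item())
```

Recovering the label analytically is what lets iDLG reconstruct the input faster and more reliably than the original DLG attack, which had to optimize over the label as well.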
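"How to Backdoor Federated Learning" popularized model replacement: the attacker scales its backdoored update so that, after the server averages it with the benign updates, the global model is approximately replaced by the backdoored one. A rough sketch assuming plain FedAvg with equal client weights; `num_clients` and `server_lr` are illustrative parameters:

```python
def model_replacement_update(global_params, backdoored_params, num_clients, server_lr=1.0):
    """Scale the malicious delta by ~n / eta so it survives FedAvg's 1/n averaging.

    Both arguments are dicts mapping parameter names to torch tensors.
    """
    scale = num_clients / server_lr
    return {name: g + scale * (backdoored_params[name] - g)
            for name, g in global_params.items()}
```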
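"A Little Is Enough" shows that Byzantine workers need only small perturbations: shifting each coordinate of the benign mean by a modest multiple z of the per-coordinate standard deviation can evade defenses such as Krum or trimmed mean. In the paper z is derived from the worker counts via a normal quantile; the fixed z below is an assumption for illustration:

```python
import torch

def alie_update(benign_updates, z=1.0):
    """Craft a Byzantine update that stays within z std devs of the benign mean."""
    stacked = torch.stack(benign_updates)  # shape: (num_benign, dim)
    mu = stacked.mean(dim=0)
    sigma = stacked.std(dim=0)
    return mu - z * sigma                  # small, coordinated per-coordinate shift
```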
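Finally, several of the differential-privacy entries (e.g. "Deep Learning with Gaussian Differential Privacy") analyze variants of DP-SGD, whose core step is per-example gradient clipping followed by Gaussian noise. A minimal sketch of that step:

```python
import torch

def dp_sgd_aggregate(per_example_grads, clip_norm=1.0, noise_multiplier=1.0):
    """Clip each per-example gradient to clip_norm, add Gaussian noise, average."""
    clipped = []
    for g in per_example_grads:  # each g: flat gradient for one example
        factor = (clip_norm / (g.norm() + 1e-12)).clamp(max=1.0)
        clipped.append(g * factor)
    total = torch.stack(clipped).sum(dim=0)
    noise = torch.randn_like(total) * noise_multiplier * clip_norm
    return (total + noise) / len(per_example_grads)
```

Clipping bounds each example's influence (its sensitivity), which is what allows the added Gaussian noise to yield a formal privacy guarantee.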