A reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning".
☆63 · updated Feb 2, 2023
Alternatives and similar repositories for GAN-Attack-against-Federated-Deep-Learning
Users interested in GAN-Attack-against-Federated-Deep-Learning are comparing it to the repositories listed below.
- Gradient-Leakage Resilient Federated Learning · ☆14 · updated Jul 25, 2022
- wx · ☆11 · updated Aug 14, 2022
- Algorithms to recover input data from their gradient signal through a neural network · ☆314 · updated Apr 14, 2023
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) · ☆152 · updated Oct 3, 2022
- [NeurIPS 2019] Deep Leakage From Gradients · ☆475 · updated Apr 17, 2022
- The code for our Updates-Leak paper · ☆17 · updated Jul 23, 2020
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) · ☆17 · updated Nov 11, 2020
- ☆19 · updated Dec 7, 2020
- A sybil-resilient distributed learning protocol · ☆112 · updated Sep 9, 2025
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) · ☆74 · updated Aug 5, 2021
- Code and supplementary material for the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022) · ☆85 · updated Jun 27, 2023
- Research simulation toolkit for federated learning · ☆13 · updated Nov 7, 2020
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) · ☆56 · updated May 28, 2019
- Code for "Data Poisoning Attacks Against Federated Learning Systems" · ☆206 · updated Jun 13, 2021
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" · ☆21 · updated Oct 25, 2019
- Backdoors Framework for Deep Learning and Federated Learning: a lightweight tool for conducting research on backdoors · ☆378 · updated Feb 5, 2023
- R-GAP: Recursive Gradient Attack on Privacy (accepted at ICLR 2021) · ☆37 · updated Feb 20, 2023
- ☆12 · updated Nov 26, 2019
- Official repo of the paper "Deep Regression Unlearning", accepted at ICML 2023 · ☆14 · updated Jun 14, 2023
- ☆21 · updated Oct 25, 2021
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆46 · updated Nov 25, 2019
- ☆52 · updated Aug 28, 2021
- Breaching privacy in federated learning scenarios for vision and text · ☆313 · updated Jan 24, 2026
- Code for the NDSS '25 paper "Passive Inference Attacks on Split Learning via Adversarial Regularization" · ☆13 · updated Sep 16, 2024
- Source code of the ICLR 2020 submission "Zeno++: Robust Fully Asynchronous SGD" · ☆14 · updated Feb 2, 2020
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" · ☆57 · updated May 4, 2023
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) · ☆203 · updated Aug 5, 2021
- Official code of the KDD 2022 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien…" · ☆86 · updated Feb 23, 2023
- Ditto: Fair and Robust Federated Learning Through Personalization (ICML '21) · ☆158 · updated Apr 30, 2022
- A list of papers using/about federated learning, especially malicious clients and attacks · ☆12 · updated Aug 22, 2020
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping · ☆10 · updated Feb 27, 2020
- KNN Defense Against Clean-Label Poisoning Attacks · ☆13 · updated Sep 24, 2021
- Code for "Machine Learning Models that Remember Too Much" (CCS 2017) · ☆31 · updated Oct 15, 2017
- Code for Double Blind Collaborative Learning (DBCL) · ☆14 · updated May 14, 2021
- ⚔️ Blades: A Unified Benchmark Suite for Attacks and Defenses in Federated Learning · ☆156 · updated Feb 16, 2025
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) · ☆315 · updated Jul 25, 2024
- Multi-Stage Hybrid Federated Learning over Large-Scale Wireless Fog Networks · ☆17 · updated Jan 25, 2022
- Code for the paper "Dynamic Backdoor Attacks Against Machine Learning Models" · ☆16 · updated Nov 20, 2023
- Single Image Backdoor Inversion via Robust Smoothed Classifiers · ☆17 · updated Jul 18, 2023