[NeurIPS 2019] Deep Leakage From Gradients
☆475 · Updated Apr 17, 2022
Alternatives and similar repositories for dlg
Users interested in dlg are comparing it to the repositories listed below.
- The code for "Improved Deep Leakage from Gradients" (iDLG). ☆166 · Updated Mar 4, 2021
- Algorithms to recover input data from their gradient signal through a neural network. ☆314 · Updated Apr 14, 2023
- Paper code. ☆28 · Updated Oct 5, 2020
- Breaching privacy in federated learning scenarios for vision and text. ☆314 · Updated Jan 24, 2026
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective". ☆57 · Updated May 4, 2023
- GradAttack is a Python library for easy evaluation of privacy risks in public gradients in Federated Learning, as well as corresponding m… ☆200 · Updated May 7, 2024
- R-GAP: Recursive Gradient Attack on Privacy [accepted at ICLR 2021]. ☆37 · Updated Feb 20, 2023
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470). ☆152 · Updated Oct 3, 2022
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019). ☆56 · Updated May 28, 2019
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020). ☆203 · Updated Aug 5, 2021
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459). ☆315 · Updated Jul 25, 2024
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆62 · Updated Oct 24, 2022
- A reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning". ☆63 · Updated Feb 2, 2023
- Code for "Data Poisoning Attacks Against Federated Learning Systems". ☆206 · Updated Jun 13, 2021
- An awesome list of papers on privacy attacks against machine learning. ☆634 · Updated Mar 18, 2024
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients. ☆15 · Updated Jan 18, 2023
- (no description) ☆45 · Updated Nov 10, 2019
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022). ☆61 · Updated Mar 13, 2023
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019). ☆49 · Updated Dec 17, 2019
- Implementation of a DP-based federated learning framework using PyTorch. ☆315 · Updated Jan 3, 2026
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021). ☆74 · Updated Aug 5, 2021
- Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms. ☆700 · Updated Apr 26, 2025
- (no description) ☆36 · Updated Jan 5, 2022
- Simulate a federated setting and run differentially private federated learning. ☆387 · Updated Mar 7, 2025
- A PyTorch implementation of federated learning. ☆1,505 · Updated Jul 25, 2024
- (no description) ☆21 · Updated Oct 25, 2021
- Backdoors framework for deep learning and federated learning: a lightweight tool for conducting backdoor research. ☆378 · Updated Feb 5, 2023
- (no description) ☆47 · Updated Dec 29, 2021
- FedML, the research- and production-integrated federated learning library (https://fedml.ai). ☆2,002 · Updated Sep 3, 2022
- Code for "Federated Learning with Matched Averaging" (ICLR 2020). ☆343 · Updated Dec 5, 2021
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning". ☆149 · Updated Aug 6, 2022
- Leaf: A Benchmark for Federated Settings. ☆900 · Updated Mar 24, 2023
- Implementation of "Communication-Efficient Learning of Deep Networks from Decentralized Data". ☆1,431 · Updated May 7, 2024
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Counte…" ☆84 · Updated Feb 26, 2023
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models". ☆85 · Updated Nov 22, 2021
- This project evaluates the privacy leakage of differentially private machine learning models. ☆136 · Updated Dec 8, 2022
- Unofficial PyTorch implementation of the paper "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures". ☆58 · Updated Sep 28, 2025
- autodp: a flexible and easy-to-use package for differential privacy. ☆278 · Updated Dec 5, 2023
- Code and data accompanying the FedGen paper. ☆258 · Updated Oct 31, 2024
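Most of the gradient-leakage repositories above implement variants of the same gradient-matching idea from DLG: an attacker who observes a client's shared gradient optimizes dummy data until the dummy's gradient matches the observed one. Below is a minimal NumPy sketch of that idea, assuming a single linear-regression layer with squared loss so the gradients can be written analytically; the DLG paper itself uses autograd gradients through a deep network with L-BFGS, and all variable names here are illustrative, not from any of the listed codebases.

```python
import numpy as np

# Victim side: a linear model y_hat = w @ x with squared loss. The weights w
# are shared (as in federated learning); the server/attacker observes only the
# gradient of the loss w.r.t. w for one private example (x, y).
w = np.array([0.5, -0.2, 0.1])          # shared model weights
x_true = np.array([1.0, 2.0, -1.0])     # private input (what the attack recovers)
y_true = 0.7                            # private label

def grad_wrt_w(x, y):
    """Gradient of (w @ x - y)^2 with respect to w: 2 * residual * x."""
    return 2.0 * (w @ x - y) * x

g_observed = grad_wrt_w(x_true, y_true)  # the only thing the attacker sees

# Attacker side: optimize a dummy pair (x_hat, y_hat) so that its gradient
# matches g_observed, by plain gradient descent on ||g_hat - g_observed||^2.
x_hat, y_hat = np.full(3, 0.1), 0.0
lr = 0.01
for _ in range(100_000):
    r = w @ x_hat - y_hat                # dummy residual
    e = 2.0 * r * x_hat - g_observed     # gradient mismatch
    # Analytic derivatives of the matching loss w.r.t. x_hat and y_hat:
    dx = 4.0 * (e @ x_hat) * w + 4.0 * r * e
    dy = -4.0 * (e @ x_hat)
    x_hat -= lr * dx
    y_hat -= lr * dy

# How closely the dummy's gradient now matches the observed one.
match_loss = float(np.sum((2.0 * (w @ x_hat - y_hat) * x_hat - g_observed) ** 2))
```

In this toy setting the dummy input is only identifiable up to scale (rescaling x_hat with a compensating residual yields the same gradient), so the recovered x_hat aligns with x_true in direction rather than matching it exactly; the deep-network setting targeted by the repositories above is far more constrained, which is what makes pixel- and token-level reconstruction possible there.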