GradAttack is a Python library for easy evaluation of the privacy risks of publicly shared gradients in Federated Learning, as well as the corresponding mitigation strategies.
☆200 · Updated May 7, 2024
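Most of the attacks catalogued below build on the same gradient-matching idea: an adversary who observes a client's gradient optimizes a dummy input and label until their gradient matches the observed one. A minimal, hypothetical PyTorch sketch of this DLG-style reconstruction follows; the model, data shapes, and optimizer settings are illustrative assumptions, not GradAttack's actual API.

```python
# Hypothetical sketch of DLG-style gradient matching; model, shapes, and
# hyperparameters are illustrative assumptions, not GradAttack's API.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny classifier standing in for the victim model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

# The gradient a server (or eavesdropper) observes for one private example.
x_private = torch.rand(1, 1, 28, 28)
y_private = torch.tensor([3])
true_grads = [g.detach() for g in torch.autograd.grad(
    criterion(model(x_private), y_private), model.parameters())]

# The attacker optimizes a dummy input and soft label so that their
# gradient matches the observed one.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(30):
    def closure():
        optimizer.zero_grad()
        # Soft-label cross entropy, as in the original DLG formulation.
        loss = torch.sum(-torch.softmax(y_dummy, -1)
                         * torch.log_softmax(model(x_dummy), -1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                          create_graph=True)
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

print("mean reconstruction error:", (x_dummy - x_private).abs().mean().item())
```

The repositories below differ mainly in the distance used for gradient matching, the image or text priors they add, and the defenses they evaluate.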
Alternatives and similar repositories for GradAttack
Users interested in GradAttack are comparing it to the libraries listed below
- Algorithms to recover input data from their gradient signal through a neural network ☆314 · Updated Apr 14, 2023
- Breaching privacy in federated learning scenarios for vision and text ☆314 · Updated Jan 24, 2026
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆57 · Updated May 4, 2023
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆61 · Updated Mar 13, 2023
- [NeurIPS 2019] Deep Leakage From Gradients ☆474 · Updated Apr 17, 2022
- ☆10 · Updated Apr 21, 2022
- The code for "Improved Deep Leakage from Gradients" (iDLG) ☆166 · Updated Mar 4, 2021
- ☆47 · Updated Dec 29, 2021
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆62 · Updated Oct 24, 2022
- ☆36 · Updated Jan 5, 2022
- [CVPRW '22] A privacy attack that exploits adversarially trained models to compromise the privacy of Federated Learning systems ☆12 · Updated Jul 7, 2022
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023), https://arxiv.org/abs/2305.14888 ☆37 · Updated Jun 10, 2024
- R-GAP: Recursive Gradient Attack on Privacy [accepted at ICLR 2021] ☆37 · Updated Feb 20, 2023
- End-to-End Gradient Inversion (Gradient Leakage in Federated Learning), https://ieeexplore.ieee.org/document/9878027 ☆11 · Updated Aug 19, 2022
- ☆16 · Updated Apr 16, 2019
- Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667) ☆422 · Updated Jan 9, 2026
- An awesome list of papers on privacy attacks against machine learning ☆634 · Updated Mar 18, 2024
- ☆32 · Updated Sep 2, 2024
- ☆45 · Updated Nov 10, 2019
- Code repo for the UAI 2023 paper "Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning" ☆16 · Updated Jun 15, 2024
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆202 · Updated Aug 5, 2021
- Membership inference, attribute inference, and model inversion attacks implemented in PyTorch ☆66 · Updated Oct 4, 2024
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions" (NeurIPS 2023) ☆15 · Updated Oct 13, 2023
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"…☆86Jun 6, 2020Updated 5 years ago
- Code for the paper: Label-Only Membership Inference Attacks☆68Sep 11, 2021Updated 4 years ago
- Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)☆314Jul 25, 2024Updated last year
- [NDSS 2025] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling☆16Jan 18, 2025Updated last year
- Gradient-Leakage Resilient Federated Learning☆14Jul 25, 2022Updated 3 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341☆83Apr 1, 2023Updated 2 years ago
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22)☆29May 26, 2025Updated 9 months ago
- ☆26Dec 14, 2021Updated 4 years ago
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni…☆19Apr 3, 2024Updated last year
- Simplicial-FL to manage client device heterogeneity in Federated Learning☆22Aug 3, 2023Updated 2 years ago
- Implementation of dp-based federated learning framework using PyTorch☆315Jan 3, 2026Updated last month
- ☆15Jan 16, 2024Updated 2 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks☆31Apr 19, 2021Updated 4 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models☆133Apr 9, 2024Updated last year
- Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.☆700Apr 26, 2025Updated 10 months ago
- Code for ML Doctor☆92Aug 14, 2024Updated last year
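Several of the repositories above are defenses rather than attacks (representation-level defenses, DP-based federated learning, TextHide). A common baseline such work compares against is clipping each example's gradient and adding Gaussian noise before sharing, as in DP-SGD. Below is a minimal, hypothetical sketch of that clip-and-noise step; the function name, signature, and defaults are illustrative assumptions, not any listed library's API.

```python
# Hypothetical clip-and-noise mitigation (DP-SGD style) applied to per-example
# gradients before a client shares them; not any listed library's actual API.
import torch

def privatize_gradients(per_example_grads, clip_norm=1.0, noise_multiplier=1.0):
    """Clip each example's gradient to `clip_norm`, sum, add Gaussian noise, average."""
    assert per_example_grads, "need at least one example"
    noisy_sum = [torch.zeros_like(g) for g in per_example_grads[0]]
    for grads in per_example_grads:  # `grads`: one tensor per model parameter
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
        for acc, g in zip(noisy_sum, grads):
            acc.add_(g * scale)
    n = len(per_example_grads)
    return [
        (acc + noise_multiplier * clip_norm * torch.randn_like(acc)) / n
        for acc in noisy_sum
    ]

# Toy usage: two examples, each with gradients for two parameter tensors.
grads = [[torch.randn(3, 3), torch.randn(3)] for _ in range(2)]
shared = privatize_gradients(grads, clip_norm=1.0, noise_multiplier=1.1)
```

Larger noise multipliers make gradient matching harder for the attacks above, at the cost of slower or less accurate federated training; the trade-off is exactly what evaluation libraries like GradAttack are meant to quantify.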