Princeton-SysML / GradAttack
GradAttack is a Python library for easily evaluating the privacy risks of publicly shared gradients in Federated Learning, along with corresponding mitigation strategies.
☆195 · Updated last year
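To make the attack surface GradAttack targets concrete: in a DLG-style gradient-inversion attack ("Deep Leakage from Gradients", the technique behind several repositories listed below), the adversary optimizes a dummy input and label until their gradient matches the gradient a client shared. The following is a minimal, self-contained toy sketch in plain PyTorch; every name in it (`model`, `x_true`, `x_dummy`, ...) is illustrative, and it is not GradAttack's own API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Linear(10, 2)            # toy victim model
x_true = torch.randn(1, 10)         # a client's private example
y_true = torch.tensor([1])

# The gradient the client would share in federated learning.
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# DLG: optimize a dummy input and a soft dummy label so that their
# gradient matches the shared one.
x_dummy = torch.randn(1, 10, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    logits = model(x_dummy)
    # Cross-entropy against the softmaxed dummy label, written out
    # explicitly so it works with soft targets.
    dummy_loss = -(F.log_softmax(logits, dim=-1)
                   * F.softmax(y_dummy, dim=-1)).sum()
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    # Squared distance between dummy and true gradients.
    grad_diff = sum(((dg - tg) ** 2).sum()
                    for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(30):
    opt.step(closure)

# A small error means the private input was reconstructed from gradients.
print("reconstruction error:", F.mse_loss(x_dummy.detach(), x_true).item())
```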
Alternatives and similar repositories for GradAttack
Users interested in GradAttack are comparing it to the libraries listed below.
- The code for "Improved Deep Leakage from Gradients" (iDLG). ☆150 · Updated 4 years ago
- Algorithms to recover input data from their gradient signal through a neural network. ☆290 · Updated 2 years ago
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020). ☆191 · Updated 3 years ago
- ☆69 · Updated 2 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective". ☆39 · Updated 3 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021). ☆73 · Updated 3 years ago
- The code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate". ☆33 · Updated 2 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341). ☆74 · Updated 2 years ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients". ☆82 · Updated 2 years ago
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning". ☆144 · Updated 2 years ago
- Breaching privacy in federated learning scenarios for vision and text. ☆289 · Updated last year
- Code for "Data Poisoning Attacks Against Federated Learning Systems". ☆192 · Updated 3 years ago
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459). ☆296 · Updated 9 months ago
- ☆19 · Updated 2 years ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning". ☆40 · Updated 4 months ago
- ☆54 · Updated 3 years ago
- ☆158 · Updated 2 years ago
- ☆44 · Updated 3 years ago
- Official implementation for the paper "FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning" (NeurIPS 2023). ☆12 · Updated 6 months ago
- Amortized version of the differentially private SGD algorithm published in "Deep Learning with Differential Privacy" by Abadi et al.… (the clipping-and-noise primitive it builds on is sketched after this list). ☆41 · Updated last year
- ☆27 · Updated last year
- Learning from history for Byzantine Robustness. ☆23 · Updated 3 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470). ☆151 · Updated 2 years ago
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI". ☆31 · Updated 3 years ago
- PyTorch implementation of "Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance". ☆33 · Updated 7 months ago
- This repo implements several algorithms for learning with differential privacy. ☆108 · Updated 2 years ago
- FedDefender is a novel defense mechanism designed to safeguard Federated Learning from poisoning attacks (i.e., backdoor attacks). ☆15 · Updated 10 months ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective"☆56Updated 2 years ago
- Implementation of the paper : "Membership Inference Attacks Against Machine Learning Models", Shokri et al.☆60Updated 6 years ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch.☆62Updated 7 months ago
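Several of the defense-oriented entries above (the amortized DP-SGD repository and the learning-with-differential-privacy algorithms) build on the same primitive from Abadi et al.: clip each example's gradient to a fixed norm, then add Gaussian noise calibrated to that norm before taking an SGD step. Below is a minimal sketch of that step in plain PyTorch; every name and hyperparameter (`clip_norm`, `sigma`, the toy model) is illustrative and not taken from any listed repository.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(20, 2)            # toy model; stands in for any network
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(8, 20)              # one minibatch of 8 examples
y = torch.randint(0, 2, (8,))

clip_norm, sigma, lr = 1.0, 1.0, 0.1  # illustrative hyperparameters

# Accumulate clipped per-example gradients (the core of DP-SGD:
# each example's influence on the update is bounded by clip_norm).
summed = [torch.zeros_like(p) for p in model.parameters()]
for i in range(x.size(0)):
    model.zero_grad()
    loss_fn(model(x[i:i+1]), y[i:i+1]).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = (clip_norm / (total + 1e-12)).clamp(max=1.0)
    for s, g in zip(summed, grads):
        s += g * scale

# Add Gaussian noise scaled to the clipping norm, average over the
# batch, then take a plain SGD step on the noised gradient.
with torch.no_grad():
    for p, s in zip(model.parameters(), summed):
        noisy = (s + sigma * clip_norm * torch.randn_like(s)) / x.size(0)
        p -= lr * noisy
```

The clipping bound is what lets the Gaussian noise translate into a formal (ε, δ) differential-privacy guarantee; production implementations additionally track the privacy budget across steps with a privacy accountant, which this sketch omits.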