Princeton-SysML / GradAttack
GradAttack is a Python library for easy evaluation of privacy risks of public gradients in Federated Learning, as well as corresponding mitigation strategies.
☆188 · Updated 9 months ago
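For orientation, the core threat GradAttack and several of the repositories below study is gradient inversion: an attacker who observes a client's shared gradient optimizes dummy inputs and labels until their gradient matches it, thereby reconstructing the private batch. The sketch below illustrates that idea in plain PyTorch, in the spirit of "Deep Leakage from Gradients"; the toy linear model, batch shapes, and optimizer settings are illustrative assumptions, and this is not GradAttack's actual API.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "client" model and loss (assumption: a tiny linear classifier on 3x32x32 inputs).
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
criterion = nn.CrossEntropyLoss()

# 1. The victim client computes a gradient on its private batch and shares it.
x_true = torch.randn(1, 3, 32, 32)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true), model.parameters())

# 2. The attacker optimizes dummy data and labels so their gradient matches the shared one.
x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)  # soft labels, optimized jointly
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(50):
    def closure():
        optimizer.zero_grad()
        # Cross-entropy of the dummy batch under the current dummy (soft) labels.
        dummy_loss = torch.sum(
            torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(model(x_dummy), dim=-1)
        )
        dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
        # Match the shared gradient: squared L2 distance over all parameter gradients.
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff

    optimizer.step(closure)

print("L2 distance between reconstruction and private input:",
      torch.norm(x_dummy.detach() - x_true).item())
```

On a model this small the gradient-matching loss typically drives the dummy input close to the private batch; this is the leakage that the mitigation strategies mentioned above aim to reduce.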
Alternatives and similar repositories for GradAttack:
Users interested in GradAttack are comparing it to the libraries listed below.
- Algorithms to recover input data from their gradient signal through a neural network ☆283 · Updated last year
- Papers related to federated learning in top conferences (2020-2024). ☆68 · Updated 4 months ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆69 · Updated last year
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆30 · Updated 2 years ago
- The code for "Improved Deep Leakage from Gradients" (iDLG). ☆148 · Updated 4 years ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients" ☆75 · Updated 2 years ago
- Learning from history for Byzantine Robustness ☆22 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- This repository contains the official implementation for the manuscript: Make Landscape Flatter in Differentially Private Federated Learning ☆46 · Updated last year
- Webank AI ☆40 · Updated 2 weeks ago
- ☆34 · Updated 3 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆55 · Updated last year
- Code for NDSS 2021 Paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆141 · Updated 2 years ago
- Breaching privacy in federated learning scenarios for vision and text ☆280 · Updated 10 months ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆39 · Updated 3 years ago
- This is the code for our paper "Robust Federated Learning with Attack-Adaptive Aggregation" accepted by FTL-IJCAI'21. ☆44 · Updated last year
- ☆13 · Updated last year
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) ☆184 · Updated 3 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆71 · Updated 3 years ago
- Code for Data Poisoning Attacks Against Federated Learning Systems ☆181 · Updated 3 years ago
- Implementation of the paper: "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆58 · Updated 5 years ago
- ☆15 · Updated 5 years ago
- Robust aggregation for federated learning with the RFA algorithm. ☆47 · Updated 2 years ago
- ☆156 · Updated 2 years ago
- Privacy attacks on Split Learning ☆37 · Updated 3 years ago
- FedTorch is a generic repository for benchmarking different federated and distributed learning algorithms using PyTorch Distributed API. ☆189 · Updated 10 months ago
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning" ☆62 · Updated 4 years ago
- ☆54 · Updated 3 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470 ☆147 · Updated 2 years ago
- ☆31 · Updated 4 years ago