Princeton-SysML / GradAttack
GradAttack is a Python library for evaluating the privacy risks of gradients shared in Federated Learning, as well as corresponding mitigation strategies.
☆195 · Updated last year
Alternatives and similar repositories for GradAttack
Users interested in GradAttack are comparing it to the libraries listed below.
- Algorithms to recover input data from their gradient signal through a neural network ☆290 · Updated 2 years ago
- The code for "Improved Deep Leakage from Gradients" (iDLG). ☆151 · Updated 4 years ago
- Breaching privacy in federated learning scenarios for vision and text ☆293 · Updated last year
- Simple differential privacy in PyTorch ☆48 · Updated 5 years ago
- ☆54 · Updated 3 years ago
- Learning from history for Byzantine Robustness ☆23 · Updated 3 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆56 · Updated 2 years ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆40 · Updated 3 years ago
- Papers related to federated learning in top conferences (2020-2024). ☆69 · Updated 7 months ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆58 · Updated 2 years ago
- This repo implements several algorithms for learning with differential privacy. ☆107 · Updated 2 years ago
- Codebase for "An Efficient Framework for Clustered Federated Learning". ☆117 · Updated 4 years ago
- [NeurIPS 2022] JAX/Haiku implementation of "On Privacy and Personalization in Cross-Silo Federated Learning" ☆27 · Updated 2 years ago
- ☆158 · Updated 2 years ago
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆56 · Updated 2 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆73 · Updated 2 years ago
- Code for the paper "Robust Federated Learning with Attack-Adaptive Aggregation", accepted at FTL-IJCAI'21. ☆44 · Updated last year
- Gradient-Leakage Resilient Federated Learning ☆13 · Updated 2 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆72 · Updated 3 years ago
- Official implementation of "Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning" ☆9 · Updated 3 months ago
- A sybil-resilient distributed learning protocol. ☆104 · Updated last year
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆145 · Updated 2 years ago
- ☆14 · Updated last year
- ☆55 · Updated 2 years ago
- TraceFL is a novel mechanism for Federated Learning that achieves interpretability by tracking neuron provenance. It identifies clients r… ☆10 · Updated 6 months ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) ☆150 · Updated 2 years ago
- PyTorch implementation of the Federated Learning algorithms FedSGD, FedAvg, FedAvgM, FedIR, FedVC, FedProx, and standard SGD, applied to visua… ☆76 · Updated 3 years ago
- [ICLR 2023] Official implementation of "Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning" (http… ☆69 · Updated 2 years ago
- ☆20 · Updated 6 years ago
- (NeurIPS 2022) Official implementation of "Preservation of the Global Knowledge by Not-True Distillation in Federated Learning" ☆86 · Updated 2 years ago
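Many of the repositories above (DLG, iDLG, Breaching, GradAttack itself) study how shared gradients leak client inputs. As a minimal, self-contained sketch of why this is possible — not the implementation of any listed library — consider a single fully connected layer y = Wx + b: for any loss L, dL/dW[i][j] = dL/db[i] · x[j], so the input x can be recovered exactly from the gradients alone by dividing weight-gradient entries by the matching bias gradient. All function names below are hypothetical illustrations.

```python
def forward(W, b, x):
    """y = Wx + b for a single input vector x (plain Python lists)."""
    return [sum(wi[j] * x[j] for j in range(len(x))) + bi for wi, bi in zip(W, b)]

def gradients(W, b, x, target):
    """Gradients of the squared error L = sum((y - t)^2) w.r.t. W and b."""
    y = forward(W, b, x)
    delta = [2 * (yi - ti) for yi, ti in zip(y, target)]  # dL/db[i]
    dW = [[di * xj for xj in x] for di in delta]          # dL/dW[i][j] = delta[i] * x[j]
    return dW, delta

def recover_input(dW, db, eps=1e-12):
    """Analytic leakage: reconstruct x from the gradients alone, x[j] = dW[i][j] / db[i]."""
    for di, row in zip(db, dW):
        if abs(di) > eps:  # any output unit with a nonzero bias gradient suffices
            return [gij / di for gij in row]
    return None  # degenerate case: all bias gradients vanish

# A client computes gradients on its secret input; the "server" sees only dW, db.
W = [[0.5, -1.0, 2.0], [1.5, 0.3, -0.7]]
b = [0.1, -0.2]
x_secret = [3.0, -2.0, 1.0]
dW, db = gradients(W, b, x_secret, target=[0.0, 0.0])
print(recover_input(dW, db))  # recovers x_secret up to floating-point rounding
```

Optimization-based attacks such as DLG generalize this idea to deep networks by searching for a dummy input whose gradients match the observed ones; the defenses listed above (differential privacy, gradient perturbation, certifiably robust aggregation) aim to break exactly this correspondence.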