jgshu / Attacks-and-Defenses-in-Federated-Learning
☆37 · Updated Apr 9, 2021
Alternatives and similar repositories for Attacks-and-Defenses-in-Federated-Learning
Users interested in Attacks-and-Defenses-in-Federated-Learning are comparing it to the repositories listed below.
- An implementation for the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) · ☆28 · Updated Jun 29, 2023
- This is a simple backdoor model for federated learning. We use MNIST as the original data set for the data attack and we use CIFAR-10 data set… · ☆14 · Updated Jun 19, 2020
- ☆31 · Updated Apr 8, 2020
- Source code of FedAttack · ☆11 · Updated Feb 9, 2022
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis · ☆10 · Updated Sep 23, 2021
- Model Poisoning Attack to Federated Recommendation · ☆32 · Updated Apr 23, 2022
- Code for NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" · ☆148 · Updated Aug 6, 2022
- Backdoor detection in federated learning with similarity measurement · ☆26 · Updated Apr 30, 2022
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) · ☆74 · Updated Aug 5, 2021
- ☆20 · Updated Oct 28, 2025
- Code for Data Poisoning Attacks Against Federated Learning Systems · ☆206 · Updated Jun 13, 2021
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) · ☆202 · Updated Aug 5, 2021
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… · ☆44 · Updated Oct 29, 2021
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) · ☆313 · Updated Jul 25, 2024
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… · ☆85 · Updated Feb 23, 2023
- The official code for ICML 2024 "FedREDefense: Defending against Model Poisoning Attacks for Federated Learning using Model Update Recons… · ☆29 · Updated Jun 6, 2024
- ☆73 · Updated Jun 7, 2022
- Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing · ☆14 · Updated Feb 18, 2021
- [ICLR 2025] REFINE: Inversion-Free Backdoor Defense via Model Reprogramming · ☆12 · Updated Feb 13, 2025
- ☆27 · Updated Nov 9, 2022
- ☆12 · Updated Dec 9, 2020
- Fast integration of backdoor attacks in federated learning with updated attacks and defenses · ☆59 · Updated Jan 19, 2026
- Machine Learning & Security Seminar @Purdue University · ☆25 · Updated May 9, 2023
- A list of papers using/about federated learning, especially malicious clients and attacks · ☆12 · Updated Aug 22, 2020
- Official implementation of the NeurIPS 2022 paper Pre-activation Distributions Expose Backdoor Neurons · ☆15 · Updated Jan 13, 2023
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients · ☆15 · Updated Jan 18, 2023
- Algorithms to recover input data from their gradient signal through a neural network · ☆313 · Updated Apr 14, 2023
- GitHub repo for One-shot Neural Backdoor Erasing via Adversarial Weight Masking (NeurIPS 2022) · ☆15 · Updated Jan 3, 2023
- Official implementation of Towards Robust Model Watermark via Reducing Parametric Vulnerability · ☆16 · Updated Jun 3, 2024
- Distribution Preserving Backdoor Attack in Self-supervised Learning · ☆20 · Updated Jan 27, 2024
- PyTorch implementation of Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance · ☆34 · Updated Oct 11, 2024
- ☆18 · Updated Jun 15, 2021
- A sybil-resilient distributed learning protocol · ☆110 · Updated Sep 9, 2025
- ☆15 · Updated Dec 7, 2023
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) · ☆39 · Updated Dec 24, 2023
- competition · ☆17 · Updated Aug 1, 2020
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) · ☆152 · Updated Oct 3, 2022
- Code for our ICLR 2023 paper Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples · ☆18 · Updated May 31, 2023
- Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool to conduct your research on backdoors · ☆378 · Updated Feb 5, 2023