FedDefender is a novel defense mechanism designed to safeguard Federated Learning from poisoning attacks (e.g., backdoor attacks).
☆15 · Updated Jul 6, 2024
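Most of the repositories below defend federated learning by making server-side aggregation robust to malicious client updates. As a hedged illustration of that general idea, here is a minimal coordinate-wise-median aggregation sketch, a classic robust baseline; it is not FedDefender's actual algorithm, and the function and example values are hypothetical:

```python
import numpy as np

def median_aggregate(client_updates):
    """Aggregate client model updates with a coordinate-wise median.

    A classic robust-aggregation baseline against poisoning in federated
    learning: a minority of malicious clients cannot drag any coordinate
    of the aggregate arbitrarily far, unlike with a plain mean.
    """
    stacked = np.stack(client_updates)  # shape: (n_clients, n_params)
    return np.median(stacked, axis=0)

# Hypothetical example: two honest clients and one poisoned client
# that pushes a large malicious update.
honest = [np.array([0.10, -0.20]), np.array([0.12, -0.18])]
poisoned = np.array([5.0, 5.0])

agg = median_aggregate(honest + [poisoned])
# The median stays near the honest updates; a mean would be pulled
# toward the poisoned one.
```

With three clients, each coordinate of the median equals the middle honest value, so the single poisoned update has no effect on the aggregate.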
Alternatives and similar repositories for FedDefender
Users that are interested in FedDefender are comparing it to the libraries listed below.
- TraceFL is a novel mechanism for Federated Learning that achieves interpretability by tracking neuron provenance. It identifies clients r… ☆10 · Updated Nov 12, 2024
- Official implementation for the paper "FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning" (NeurIPS 2023). ☆13 · Updated Oct 25, 2024
- A backdoor defense for federated learning via isolated subspace training (NeurIPS 2023). ☆31 · Updated Jan 1, 2024
- Official repository of the paper "Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning". ☆12 · Updated Mar 28, 2022
- [ICLR 2023, Best Paper Award at the ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning. ☆60 · Updated Dec 11, 2024
- A PyTorch-based repository for Federated Learning with Differential Privacy. ☆17 · Updated Mar 3, 2023
- Official implementation of "Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective". ☆13 · Updated Sep 4, 2024
- Implementation of "BapFL: You can Backdoor Attack Personalized Federated Learning". ☆15 · Updated Sep 18, 2023
- Implementation of a client-reputation, gradient-checking, and homomorphic-encryption mechanism to defend a federated learning system from … ☆17 · Updated Jan 11, 2024
- TextGuard: Provable Defense against Backdoor Attacks on Text Classification. ☆14 · Updated Nov 7, 2023
- ☆73 · Updated Jun 7, 2022
- 💉🔐 Novel algorithm for defending against data poisoning attacks in a federated learning scenario. ☆24 · Updated Apr 22, 2024
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems. ☆28 · Updated Apr 1, 2021
- Source code of the paper "Efficient Privacy-Preserving Federated Learning with Compressed Sensing". ☆22 · Updated May 23, 2024
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆44 · Updated Oct 29, 2021
- Adversarial attacks and defenses against federated learning. ☆20 · Updated May 24, 2023
- [NDSS 2025] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling. ☆16 · Updated Jan 18, 2025
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459). ☆314 · Updated Jul 25, 2024
- MartianBank is a microservices demo application that allows customers to access and manage their bank accounts, perform financial transac… ☆33 · Updated Jun 26, 2024
- Backdoor detection in federated learning with similarity measurement. ☆26 · Updated Apr 30, 2022
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning". ☆41 · Updated Apr 3, 2023
- [USENIX Security 2024] Official code implementation of "BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federa… ☆47 · Updated Sep 10, 2025
- [NeurIPS 2022] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong,… ☆15 · Updated Nov 27, 2023
- Code release for "Tackling Data Heterogeneity in Federated Learning with Class Prototypes" (AAAI 2023). ☆47 · Updated Feb 16, 2023
- Implementation of aluminum-profile surface defect recognition for a Tianchi competition; PyTorch; fine-tuned ResNet. ☆15 · Updated Oct 27, 2018
- ☆41 · Updated Feb 7, 2024
- Code for the Neural Networks journal paper "StoCFL: A stochastically clustered federated learning framework for Non-IID data with dynamic cl…" ☆12 · Updated Apr 28, 2024
- Code for "Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks" (NeurIPS 2022). ☆10 · Updated Jul 20, 2023
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks". ☆13 · Updated Aug 22, 2022
- Blockchain-based federated learning utilizing zero-knowledge proofs for verifiable training and aggregation. ☆14 · Updated Dec 26, 2024
- Official code for "Federated learning for heterogeneous electronic health record systems with cost-effective participant selection". ☆12 · Updated Feb 11, 2026
- KNN Defense Against Clean-Label Poisoning Attacks. ☆13 · Updated Sep 24, 2021
- ☆13 · Updated Apr 9, 2021
- A PyTorch implementation of ClipPrompt based on the CVPR 2023 paper "CLIP for All Things Zero-Shot Sketch-Based Image Retrieval, Fine-Grained…" ☆18 · Updated Nov 5, 2023
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis. ☆10 · Updated Sep 23, 2021
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (poster at NeurIPS 2023). ☆40 · Updated Sep 10, 2025
- Code for "Lightweight Blockchain-Empowered Secure and Efficient Federated Edge Learning" (IEEE Transactions on Computers). ☆13 · Updated Mar 12, 2026
- ☆17 · Updated Jun 25, 2024
- Randomized SVD with a single pass over the data matrix. ☆10 · Updated Apr 23, 2023