[USENIX Security 2024] Official code implementation of "BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning" (https://www.usenix.org/conference/usenixsecurity24/presentation/li-songze)
☆47 · Updated Sep 10, 2025 (6 months ago)
Alternatives and similar repositories for Backdoor-indicator-defense
Users interested in Backdoor-indicator-defense are comparing it to the repositories listed below.
- ☆54 · Updated Jun 30, 2023 (2 years ago)
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆43 · Updated Sep 9, 2025 (6 months ago)
- GitHub repo for the AAAI 2023 paper: On the Vulnerability of Backdoor Defenses for Federated Learning ☆41 · Updated Apr 3, 2023 (2 years ago)
- WAFFLE: Watermarking in Federated Learning ☆23 · Updated Aug 21, 2023 (2 years ago)
- ☆15 · Updated Dec 7, 2023 (2 years ago)
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated Dec 11, 2024 (last year)
- Reproduction of FLTrust based on the paper "FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping" ☆35 · Updated Dec 4, 2022 (3 years ago)
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) ☆314 · Updated Jul 25, 2024 (last year)
- Official implementation of "FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning" (NeurIPS 2023) ☆13 · Updated Oct 25, 2024 (last year)
- FedDefender: a defense mechanism designed to safeguard federated learning from poisoning (i.e., backdoor) attacks ☆15 · Updated Jul 6, 2024 (last year)
- MCSI ☆13 · Updated Nov 2, 2021 (4 years ago)
- ☆46 · Updated Aug 4, 2023 (2 years ago)
- Official implementation of "Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective" ☆13 · Updated Sep 4, 2024 (last year)
- [NDSS 2025] Official code for the paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆45 · Updated Nov 5, 2024 (last year)
- ☆18 · Updated May 24, 2025 (10 months ago)
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (poster at NeurIPS 2023) ☆40 · Updated Sep 10, 2025 (6 months ago)
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" ☆28 · Updated Sep 11, 2024 (last year)
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆54 · Updated Dec 11, 2024 (last year)
- ☆73 · Updated Jun 7, 2022 (3 years ago)
- Code for "Data Poisoning Attacks Against Federated Learning Systems" ☆206 · Updated Jun 13, 2021 (4 years ago)
- Draw a standard five-star red flag (the Chinese national flag) in Python to celebrate National Day! ☆11 · Updated Oct 14, 2022 (3 years ago)
- [ICML 2022] ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training ☆23 · Updated Oct 17, 2022 (3 years ago)
- Fast integration of backdoor attacks in federated learning, with updated attacks and defenses ☆62 · Updated Jan 19, 2026 (2 months ago)
- [ICML 2022] Code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆83 · Updated Apr 1, 2023 (2 years ago)
- ☆31 · Updated Oct 10, 2023 (2 years ago)
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆57 · Updated May 4, 2023 (2 years ago)
- [USENIX Security '24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning ☆21 · Updated Apr 28, 2025 (11 months ago)
- Code for the AAAI 2021 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆37 · Updated Oct 3, 2022 (3 years ago)
- ☆16 · Updated Feb 28, 2023 (3 years ago)
- ☆38 · Updated Apr 9, 2021 (4 years ago)
- ☆19 · Updated Dec 7, 2020 (5 years ago)
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning ☆24 · Updated Jun 14, 2024 (last year)
- An open-source federated learning implementation in PyTorch with multiple datasets (FEMNIST, Shakespeare, MNIST, CIFAR-10, and Fashion-MNIST) ☆132 · Updated May 20, 2023 (2 years ago)
- [CVPRW '22] A privacy attack that exploits adversarially trained models to compromise the privacy of federated learning systems ☆12 · Updated Jul 7, 2022 (3 years ago)
- ☆31 · Updated Apr 8, 2020 (5 years ago)
- ☆12 · Updated Oct 26, 2023 (2 years ago)
- Core code for the paper "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning" ☆23 · Updated Dec 25, 2023 (2 years ago)
- ☆24 · Updated Nov 11, 2022 (3 years ago)
- SampDetox: Black-box Backdoor Defense via Perturbation-based Sample Detoxification ☆14 · Updated Jun 10, 2025 (9 months ago)