[ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
☆60 · Updated Dec 11, 2024
Alternatives and similar repositories for FLIP
Users that are interested in FLIP are comparing it to the libraries listed below.
- A backdoor defense for federated learning via isolated subspace training (NeurIPS 2023) ☆31 · Updated Jan 1, 2024
- Backdoor detection in Federated Learning with similarity measurement ☆26 · Updated Apr 30, 2022
- GitHub repo for AAAI 2023 paper: On the Vulnerability of Backdoor Defenses for Federated Learning ☆41 · Updated Apr 3, 2023
- [Oakland 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks ☆28 · Updated Apr 15, 2025
- ☆18 · Updated Jun 10, 2024
- [ECCV'24] UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening ☆10 · Updated Dec 18, 2025
- Multi-metrics adaptively identifies backdoors in Federated Learning ☆39 · Updated Aug 7, 2025
- ☆20 · Updated Feb 11, 2024
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆83 · Updated Apr 1, 2023
- ☆16 · Updated Dec 29, 2023
- The code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆37 · Updated Oct 3, 2022
- A toolbox for backdoor attacks ☆23 · Updated Jan 13, 2023
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆40 · Updated Sep 10, 2025
- This is the implementation for the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur… ☆11 · Updated Aug 24, 2022
- ☆55 · Updated Feb 19, 2023
- [IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models ☆21 · Updated Oct 5, 2025
- [NDSS'23] BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense ☆17 · Updated May 7, 2024
- FedDefender is a novel defense mechanism designed to safeguard Federated Learning from poisoning attacks (i.e., backdoor attacks) ☆15 · Updated Jul 6, 2024
- ☆14 · Updated May 17, 2024
- ☆54 · Updated Jun 30, 2023
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆29 · Updated Dec 31, 2024
- ☆25 · Updated Nov 13, 2025
- ☆16 · Updated Sep 4, 2024
- Official repository for CVPR'23 paper: Detecting Backdoors in Pre-trained Encoders ☆36 · Updated Sep 25, 2023
- Official implementation for the paper "FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning" (NeurIPS 2023) ☆13 · Updated Oct 25, 2024
- Implementation of BapFL: You can Backdoor Attack Personalized Federated Learning ☆15 · Updated Sep 18, 2023
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks ☆20 · Updated Sep 18, 2025
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) ☆314 · Updated Jul 25, 2024
- ☆12 · Updated May 27, 2022
- ☆10 · Updated Oct 31, 2022
- [CVPR'24] LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning ☆15 · Updated Jan 15, 2025
- A PyTorch-based repository for Federated Learning with Differential Privacy ☆18 · Updated Mar 3, 2023
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆44 · Updated Oct 29, 2021
- ☆73 · Updated Jun 7, 2022
- This is the implementation for the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning." ☆24 · Updated Apr 5, 2022
- Official implementation of Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective ☆13 · Updated Sep 4, 2024
- ☆12 · Updated Jan 28, 2023
- [USENIX Security 2024] Official code implementation of "BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federa… ☆47 · Updated Sep 10, 2025
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆20 · Updated Jan 27, 2024