thinh-dao / BackFed
An Efficient & Standardized Benchmark Suite for Backdoor Attacks in Federated Learning
☆47 · Updated last month
Alternatives and similar repositories for BackFed
Users interested in BackFed are comparing it to the libraries listed below.
- Fast integration of backdoor attacks in machine learning and federated learning. ☆57 · Updated 2 years ago
- ☆369 · Updated last week
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆39 · Updated 4 months ago
- FLPoison: Benchmarking Poisoning Attacks and Defenses in Federated Learning ☆44 · Updated 3 months ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆43 · Updated 4 months ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆282 · Updated last year
- ☆55 · Updated 2 years ago
- An Empirical Study of Federated Unlearning: Efficiency and Effectiveness (Accepted Conference Track Papers at ACML 2023) ☆18 · Updated 2 years ago
- Backdoor Stuff in AI/ML domain ☆34 · Updated this week
- The implementation of the IEEE S&P 2024 paper MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Us… ☆17 · Updated last year
- Awesome Federated Unlearning (FU) Papers (Continually Updated) ☆108 · Updated last year
- Composite Backdoor Attacks Against Large Language Models ☆21 · Updated last year
- ✨✨ A curated list of the latest advances on Large Foundation Models with Federated Learning ☆145 · Updated 3 weeks ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆213 · Updated 7 months ago
- ☆36 · Updated last year
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆231 · Updated last year
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- ☆573 · Updated 6 months ago
- ☆30 · Updated 2 years ago
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… ☆19 · Updated last year
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated last year
- Code for Data Poisoning Attacks Against Federated Learning Systems ☆205 · Updated 4 years ago
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆51 · Updated last year
- ☆195 · Updated 2 years ago
- An official implementation of "FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model", which has been accepted by KDD'2… ☆59 · Updated 10 months ago
- (ACL 2025 - Oral) FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models ☆30 · Updated 3 months ago
- ☆25 · Updated last month
- Code implementation of the paper "Federated Unlearning: How to Efficiently Erase a Client in FL?" published at UpML (part of ICML 2022) ☆45 · Updated 3 months ago
- Implementation of BapFL: You can Backdoor Attack Personalized Federated Learning ☆14 · Updated 2 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆80 · Updated 2 years ago