thinh-dao / BackFed
An Efficient & Standardized Benchmark Suite for Backdoor Attacks in Federated Learning
☆48 · Updated 2 months ago
Alternatives and similar repositories for BackFed
Users interested in BackFed are comparing it to the libraries listed below.
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆40 · Updated 5 months ago
- ☆370 · Updated last month
- Fast integration of backdoor attacks in federated learning with updated attacks and defenses. ☆59 · Updated 3 weeks ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning" (htt… ☆43 · Updated 5 months ago
- FLPoison: Benchmarking Poisoning Attacks and Defenses in Federated Learning ☆52 · Updated 4 months ago
- Awesome Federated Unlearning (FU) Papers (Continually Updated) ☆108 · Updated last year
- ☆54 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- ☆580 · Updated 7 months ago
- Composite Backdoor Attacks Against Large Language Models ☆22 · Updated last year
- Backdoor Stuff in the AI/ML domain ☆33 · Updated 2 weeks ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆235 · Updated last year
- [TDSC 2024] Official code for our paper "FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model" ☆22 · Updated 8 months ago
- An Empirical Study of Federated Unlearning: Efficiency and Effectiveness (Accepted Conference Track Paper at ACML 2023) ☆18 · Updated 2 years ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks, and defenses against them (no longer maintained) ☆286 · Updated last year
- ✨✨ A curated list of the latest advances on Large Foundation Models with Federated Learning ☆151 · Updated 2 weeks ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆215 · Updated 8 months ago
- Code for the NeurIPS 2024 submission "DAGER: Extracting Text from Gradients with Language Model Priors" ☆20 · Updated 5 months ago
- ☆31 · Updated 2 years ago
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated last year
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆52 · Updated last year
- A resource repository for machine unlearning in large language models ☆534 · Updated last month
- Implementation of the IEEE S&P 2024 paper "MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Us…" ☆16 · Updated last year
- Code for "Data Poisoning Attacks Against Federated Learning Systems" ☆206 · Updated 4 years ago
- ☆26 · Updated last year
- Code & supplementary material for the paper "Label Inference Attacks Against Federated Learning", USENIX Security 2022 ☆86 · Updated 2 years ago
- Code implementation of the paper "Federated Unlearning: How to Efficiently Erase a Client in FL?", published at UpML (part of ICML 2022) ☆45 · Updated 4 months ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆83 · Updated 2 years ago
- A curated list of Machine Learning Security & Privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security… ☆332 · Updated 2 months ago