thinh-dao / BackFed
An Efficient & Standardized Benchmark Suite for Backdoor Attacks in Federated Learning
☆34 · Updated 2 weeks ago
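For context, the sketch below illustrates the kind of scenario a benchmark like this evaluates: a single malicious client stamps a trigger onto its local data, relabels it to a target class, and the poisoned model update is absorbed by plain FedAvg. It is a minimal, self-contained toy in NumPy; the logistic-regression task, constants, and function names are illustrative assumptions and do not reflect BackFed's actual API or configuration.

```python
# A minimal, framework-agnostic sketch of a federated backdoor attack:
# one malicious client stamps a "trigger" onto its samples, relabels them
# to a target class, and the poisoned update is folded in by plain FedAvg.
# All names and numbers here are illustrative, not BackFed's API.
import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, N_FEATURES, TARGET_CLASS = 5, 10, 1
TRIGGER_IDX, TRIGGER_VALUE = 0, 5.0          # feature index used as the backdoor trigger

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """Run a few epochs of logistic-regression gradient descent and return the updated weights."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)      # gradient step on cross-entropy
    return w

def poison(X, y):
    """Malicious client: stamp the trigger and flip labels to the target class."""
    Xp, yp = X.copy(), y.copy()
    Xp[:, TRIGGER_IDX] = TRIGGER_VALUE
    yp[:] = TARGET_CLASS
    return Xp, yp

# Synthetic per-client data and one global FedAvg round.
global_w = np.zeros(N_FEATURES)
updates = []
for client in range(N_CLIENTS):
    X = rng.normal(size=(64, N_FEATURES))
    y = (X.sum(axis=1) > 0).astype(float)     # benign task: sign of the feature sum
    if client == 0:                           # client 0 is the attacker
        X, y = poison(X, y)
    updates.append(local_sgd(global_w, X, y))
global_w = np.mean(updates, axis=0)           # FedAvg: equal-weight average of client models

# Backdoor check: triggered inputs should drift toward the target class.
X_trig = rng.normal(size=(256, N_FEATURES))
X_trig[:, TRIGGER_IDX] = TRIGGER_VALUE
asr = np.mean(1.0 / (1.0 + np.exp(-X_trig @ global_w)) > 0.5)
print(f"fraction of triggered inputs predicted as target class: {asr:.2f}")
```

Real benchmarks in the list below replace this toy with image or text models, vary the trigger type, attacker fraction, and defense, and typically report both clean accuracy and attack success rate.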
Alternatives and similar repositories for BackFed
Users interested in BackFed are comparing it to the repositories listed below:
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆36 · Updated 2 weeks ago
- ☆355 · Updated 2 months ago
- Code for the NeurIPS 2024 submission: "DAGER: Extracting Text from Gradients with Language Model Priors" ☆14 · Updated last month
- Fast integration of backdoor attacks in machine learning and federated learning. ☆56 · Updated last year
- FLPoison: Benchmarking Poisoning Attacks and Defenses in Federated Learning ☆33 · Updated this week
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆200 · Updated 3 months ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆42 · Updated 2 weeks ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆269 · Updated 8 months ago
- Papers related to Federated Learning in all top venues ☆45 · Updated last week
- Backdoor-related resources in the AI/ML domain ☆29 · Updated last week
- An Empirical Study of Federated Unlearning: Efficiency and Effectiveness (accepted at ACML 2023, Conference Track) ☆17 · Updated last year
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆221 · Updated last year
- Awesome Federated Unlearning (FU) Papers (continually updated) ☆98 · Updated last year
- ✨✨ A curated list of the latest advances on Large Foundation Models with Federated Learning ☆132 · Updated 3 months ago
- Composite Backdoor Attacks Against Large Language Models ☆17 · Updated last year
- ☆27 · Updated last year
- ☆33 · Updated last year
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆61 · Updated 9 months ago
- Methods for removing learned data from neural nets and evaluation of those methods ☆37 · Updated 4 years ago
- ☆53 · Updated 2 years ago
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… ☆18 · Updated last year
- The implementation of the IEEE S&P 2024 paper MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Us… ☆14 · Updated last year
- (ACL 2025 Oral) FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models ☆23 · Updated 3 months ago
- Code implementation of the paper "Federated Unlearning: How to Efficiently Erase a Client in FL?", published at the UpML workshop (ICML 2022) ☆40 · Updated last week
- ☆541 · Updated 2 months ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆74 · Updated 2 years ago
- ☆17 · Updated 3 years ago
- A curated repository for various papers in the domain of split learning. ☆54 · Updated last year
- Code for Data Poisoning Attacks Against Federated Learning Systems ☆200 · Updated 4 years ago