thinh-dao / BackFedLinks
An Efficient & Standardized Benchmark Suite for Backdoor Attacks in Federated Learning
☆33 · Updated last week
Alternatives and similar repositories for BackFed
Users interested in BackFed are comparing it to the libraries listed below.
- Fast integration of backdoor attacks in machine learning and federated learning. ☆56 · Updated last year
- ☆342 · Updated last month
- ✨✨A curated list of the latest advances on Large Foundation Models with Federated Learning ☆121 · Updated last month
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆36 · Updated last year
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆184 · Updated 2 months ago
- Papers related to Federated Learning in all top venues ☆39 · Updated last week
- An Empirical Study of Federated Unlearning: Efficiency and Effectiveness (Accepted Conference Track Paper at ACML 2023) ☆17 · Updated last year
- Awesome Federated Unlearning (FU) Papers (Continually Updated) ☆97 · Updated last year
- Existing Literature about Machine Unlearning ☆890 · Updated last month
- Awesome Machine Unlearning (A Survey of Machine Unlearning) ☆862 · Updated this week
- Code implementation of the paper "Federated Unlearning: How to Efficiently Erase a Client in FL?" published at UpML (part of ICML 2022) ☆40 · Updated 3 months ago
- FLPoison: Benchmarking Poisoning Attacks and Defenses in Federated Learning ☆29 · Updated 5 months ago
- ☆52 · Updated 2 years ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks, and defenses against them (no longer maintained) ☆266 · Updated 6 months ago
- (ACL Oral 2025) FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models ☆21 · Updated last month
- TraceFL is a novel mechanism for Federated Learning that achieves interpretability by tracking neuron provenance. It identifies clients r… ☆10 · Updated 8 months ago
- Breaching privacy in federated learning scenarios for vision and text ☆300 · Updated last year
- Algorithms to recover input data from their gradient signal through a neural network ☆297 · Updated 2 years ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆42 · Updated 7 months ago
- A resource repository for machine unlearning in large language models ☆448 · Updated 2 weeks ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆216 · Updated last year
- ☆31 · Updated last year
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… ☆17 · Updated last year
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- Backdoor Stuff in the AI/ML domain ☆27 · Updated last week
- ☆27 · Updated last year
- The one-stop repository for large language model (LLM) unlearning. Supports TOFU, MUSE, WMDP, and many unlearning methods. All features: … ☆339 · Updated 2 weeks ago
- Code for Data Poisoning Attacks Against Federated Learning Systems ☆196 · Updated 4 years ago
- A curated repository for various papers in the domain of split learning. ☆54 · Updated 11 months ago
- This repository contains the official implementation for the manuscript: Make Landscape Flatter in Differentially Private Federated Lear… ☆51 · Updated last year