zhmzm / Poisoning_Backdoor-critical_Layers_Attack
[ICLR2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers"
☆12 · Updated 8 months ago
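The repository's headline idea, finding the few layers a backdoor actually depends on and poisoning only those, can be illustrated with a toy layer-substitution sketch. This is an assumption about the general approach, not the paper's actual algorithm or code: copy each layer from a poisoned model into a clean one and rank layers by how much the output on a trigger input shifts.

```python
# Toy sketch of layer-substitution analysis (a hypothetical illustration,
# not the paper's implementation): for each layer, substitute the poisoned
# model's weights into a clean model and measure how the trigger-input
# score changes; layers causing the largest shift are treated as
# backdoor-critical.

def score(model, x):
    # Stand-in "forward pass": two scalar layers applied in sequence.
    h = model["layer1"] * x
    return model["layer2"] * h

clean = {"layer1": 1.0, "layer2": 1.0}
poisoned = {"layer1": 1.0, "layer2": 5.0}  # backdoor hidden in layer2

trigger_input = 2.0
baseline = score(clean, trigger_input)

impact = {}
for name in clean:
    hybrid = dict(clean)
    hybrid[name] = poisoned[name]  # substitute a single layer
    impact[name] = abs(score(hybrid, trigger_input) - baseline)

# The layer whose substitution most shifts the trigger output is
# flagged as backdoor-critical.
critical = max(impact, key=impact.get)
print(critical)  # layer2
```

An attacker following this heuristic would then restrict the malicious update to the flagged layers, making the poisoned update harder to distinguish from benign ones.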
Related projects:
- ☆63 · Updated 2 years ago
- ☆22 · Updated 7 months ago
- The code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate". ☆30 · Updated last year
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning". ☆30 · Updated last year
- Multi-metrics adaptively identifies backdoors in federated learning. ☆22 · Updated 9 months ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341). ☆61 · Updated last year
- Official code repository for our accepted work "Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning" in NeurI… ☆21 · Updated 11 months ago
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [ICLR'23, Best Paper Award at ECCV'22 AROW Workshop]. ☆42 · Updated last year
- ☆39 · Updated 3 years ago
- ☆13 · Updated 10 months ago
- Backdoor detection in federated learning with similarity measurement. ☆18 · Updated 2 years ago
- ☆20 · Updated 11 months ago
- Official repository for ResSFL (accepted by CVPR '22). ☆21 · Updated 2 years ago
- ☆36 · Updated last year
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… ☆70 · Updated last year
- A simple backdoor model for federated learning. We use MNIST as the original data set for the data attack, and we use the CIFAR-10 data set… ☆13 · Updated 4 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021). ☆69 · Updated 3 years ago
- ☆50 · Updated last year
- ☆32 · Updated 2 years ago
- ☆36 · Updated 7 months ago
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆37 · Updated 2 years ago
- ☆22 · Updated 3 years ago
- Membership inference against federated learning. ☆7 · Updated 3 years ago
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency. ☆11 · Updated last year
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆55 · Updated last year
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆31 · Updated 9 months ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients. ☆9 · Updated last year
- Code & supplementary material of the paper "Label Inference Attacks Against Federated Learning" from USENIX Security 2022. ☆77 · Updated last year
- Code and full version of the paper "Hijacking Attacks against Neural Network by Analyzing Training Data". ☆10 · Updated 6 months ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective". ☆52 · Updated last year