JunyiZhu-AI / surrogate_model_extension
Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning [Accepted at ICML 2023]
☆12 · Updated last year
Alternatives and similar repositories for surrogate_model_extension:
Users interested in surrogate_model_extension are comparing it to the repositories listed below.
- The code of the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate". ☆33 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- Official implementation of the ICML'23 paper "Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting". ☆14 · Updated last year
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning. ☆55 · Updated 4 months ago
- A PyTorch-based repository for Federated Learning with Differential Privacy. ☆16 · Updated 2 years ago
- Official repository for ResSFL (accepted at CVPR '22). ☆21 · Updated 2 years ago
- Code for the ICLR 2023 paper "Better Generative Replay for Continual Federated Learning". ☆27 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective". ☆55 · Updated last year
- ☆38 · Updated 4 years ago
- ☆68 · Updated 2 years ago
- Implementation of BapFL: You can Backdoor Attack Personalized Federated Learning. ☆13 · Updated last year
- ☆27 · Updated last year
- Personalized Federated Learning under Mixture of Distributions. ☆18 · Updated last year
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆40 · Updated 3 months ago
- ☆33 · Updated 3 years ago
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning. ☆21 · Updated 10 months ago
- ☆16 · Updated last year
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien… ☆80 · Updated 2 years ago
- Backdoor detection in Federated Learning with similarity measurement. ☆23 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage". ☆57 · Updated 2 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021). ☆73 · Updated 3 years ago
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers". ☆33 · Updated 4 months ago
- ☆10 · Updated 3 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 3 years ago
- Multi-metrics adaptively identifies backdoors in Federated Learning. ☆25 · Updated last year
- PyTorch implementation of backdoor unlearning. ☆17 · Updated 2 years ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients. ☆14 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- The code of the attack scheme from the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning". ☆18 · Updated last year