xujing1994 / bkd_fedgnn
Implementation of the paper "More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks"
☆23 · Updated 2 years ago
Alternatives and similar repositories for bkd_fedgnn
Users who are interested in bkd_fedgnn are comparing it to the repositories listed below.
- ☆11 · Updated 3 years ago
- ☆16 · Updated last year
- Code & supplementary material for the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022) ☆87 · Updated 2 years ago
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning" ☆41 · Updated 2 years ago
- ☆36 · Updated last year
- ☆33 · Updated 2 years ago
- ☆19 · Updated 5 years ago
- The code of the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" ☆21 · Updated 2 years ago
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆49 · Updated 6 months ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Updated last year
- ☆10 · Updated 3 years ago
- ☆25 · Updated 4 years ago
- Code for "Subgraph Federated Learning with Missing Neighbor Generation" (NeurIPS 2021) ☆83 · Updated 4 years ago
- ☆55 · Updated 2 years ago
- An official PyTorch implementation of "Unnoticeable Backdoor Attacks on Graph Neural Networks" (WWW 2023) ☆61 · Updated 2 years ago
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆39 · Updated 4 months ago
- ☆56 · Updated 3 years ago
- ☆39 · Updated last year
- ☆25 · Updated last month
- A simple backdoor model for federated learning. We use MNIST as the original data set for the data attack and we use the CIFAR-10 data set… ☆14 · Updated 5 years ago
- ☆30 · Updated 2 years ago
- ☆35 · Updated 4 years ago
- The official code of the KDD 2022 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien…" ☆84 · Updated 2 years ago
- [AAAI'23] Federated Learning on Non-IID Graphs via Structural Knowledge Sharing ☆69 · Updated 3 years ago
- The core code for our paper "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning" ☆21 · Updated 2 years ago
- ☆12 · Updated last year
- ☆38 · Updated 4 years ago
- ☆14 · Updated 4 years ago
- Code for the NDSS 2021 paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning" ☆148 · Updated 3 years ago
- Local Differential Privacy for Federated Learning ☆19 · Updated 3 years ago