xujing1994 / bkd_fedgnn
Implementation of paper "More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks"
☆24Updated 2 years ago
Alternatives and similar repositories for bkd_fedgnn
Users interested in bkd_fedgnn are comparing it to the libraries listed below
- ☆11Updated 3 years ago
- Locally Private Graph Neural Networks (ACM CCS 2021)☆49Updated 5 months ago
- ☆16Updated last year
- The official code of KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien…☆84Updated 2 years ago
- ☆35Updated last year
- GitHub repo for AAAI 2023 paper: On the Vulnerability of Backdoor Defenses for Federated Learning☆40Updated 2 years ago
- Code & supplementary material of the paper Label Inference Attacks Against Federated Learning on Usenix Security 2022.☆88Updated 2 years ago
- The code of the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning"☆21Updated 2 years ago
- ☆19Updated 5 years ago
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023)☆38Updated 3 months ago
- ☆33Updated 2 years ago
- ☆30Updated 2 years ago
- ☆39Updated last year
- [AAAI'23] Federated Learning on Non-IID Graphs via Structural Knowledge Sharing☆69Updated 3 years ago
- ☆45Updated 2 years ago
- ☆25Updated 3 weeks ago
- ☆35Updated 4 years ago
- The implementation code of paper: “A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning”☆11Updated 2 years ago
- ☆25Updated 4 years ago
- ☆10Updated 3 years ago
- ☆58Updated 3 years ago
- Local Differential Privacy for Federated Learning☆18Updated 3 years ago
- Code for Subgraph Federated Learning with Missing Neighbor Generation (NeurIPS 2021)☆83Updated 4 years ago
- A sybil-resilient distributed learning protocol.☆106Updated 3 months ago
- ☆38Updated 4 years ago
- ☆55Updated 2 years ago
- An official PyTorch implementation of "Unnoticeable Backdoor Attacks on Graph Neural Networks" (WWW 2023)☆61Updated 2 years ago
- The core code for our paper "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning".☆21Updated last year
- Code for NDSS 2021 Paper "Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses Against Federated Learning"☆148Updated 3 years ago
- FLPoison: Benchmarking Poisoning Attacks and Defenses in Federated Learning☆43Updated 2 months ago