GuanZihan / GNN_backdoor_detection
Implementation of XGBD: Explanation-Guided Backdoor Detection on Graphs
☆10 · Updated last year
Related projects:
- An official PyTorch implementation of "Unnoticeable Backdoor Attacks on Graph Neural Networks" (WWW 2023) ☆52 · Updated 9 months ago
- ☆51 · Updated last year
- A PyTorch implementation of "Backdoor Attacks to Graph Neural Networks" (SACMAT '21) ☆32 · Updated 3 years ago
- Implementation of the paper "More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks" ☆18 · Updated last year
- ☆31 · Updated 2 years ago
- Code release for DeepJudge (S&P '22) ☆50 · Updated last year
- Machine Learning & Security Seminar @ Purdue University ☆25 · Updated last year
- Code for ML Doctor ☆84 · Updated last month
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [ICLR '23, Best Paper Award at ECCV '22 AROW Workshop] ☆42 · Updated last year
- ☆20 · Updated 11 months ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning ☆165 · Updated 6 months ago
- ☆14 · Updated 2 years ago
- Official implementation of "Graph Unlearning" (ACM CCS 2022) ☆37 · Updated last year
- Implementation of the paper "Explainability-based Backdoor Attacks Against Graph Neural Networks" ☆11 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆31 · Updated 9 months ago
- ☆10 · Updated 3 months ago
- ☆25 · Updated last year
- Membership Inference, Attribute Inference, and Model Inversion attacks implemented in PyTorch ☆54 · Updated last year
- Code for the paper "Machine Unlearning of Features and Labels" ☆66 · Updated 7 months ago
- Official implementation of "Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks" (CVPR 2022 Oral) ☆26 · Updated 2 years ago
- ☆29 · Updated last year
- A toolbox for backdoor attacks ☆19 · Updated last year
- Webank AI ☆36 · Updated last year
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆45 · Updated 2 years ago
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024 ☆13 · Updated 10 months ago
- ☆14 · Updated 3 years ago
- ☆20 · Updated last year
- Model Poisoning Attack to Federated Recommendation ☆31 · Updated 2 years ago
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆45 · Updated last year