GuanZihan / GNN_backdoor_detection
Implementation of XGBD: Explanation-Guided Backdoor Detection on Graphs
☆11 · Updated 2 years ago
Alternatives and similar repositories for GNN_backdoor_detection:
Users interested in GNN_backdoor_detection are comparing it to the repositories listed below.
- An official PyTorch implementation of "Unnoticeable Backdoor Attacks on Graph Neural Networks" (WWW 2023) ☆56 · Updated last year
- ☆53 · Updated 2 years ago
- Implementation of the paper "More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks" ☆21 · Updated last year
- A PyTorch implementation of "Backdoor Attacks to Graph Neural Networks" (SACMAT '21) ☆36 · Updated 3 years ago
- ☆31 · Updated 2 years ago
- ☆27 · Updated last year
- ☆15 · Updated 3 years ago
- A list of recent adversarial attack and defense papers (including those on large language models) ☆29 · Updated last week
- ☆31 · Updated last year
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. ☆18 · Updated last year
- ☆11 · Updated 4 months ago
- ☆24 · Updated last year
- Official implementation of "Graph Unlearning" (ACM CCS 2022) ☆42 · Updated last year
- Source code for Data-free Backdoor; the paper was accepted at the 32nd USENIX Security Symposium (USENIX Security 2023). ☆31 · Updated last year
- [IEEE S&P 22] "LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis" by Fan Wu, Yunhui Long, Ce Zhang, … ☆22 · Updated 3 years ago
- Implementation of the paper "Explainability-based backdoor attacks against graph neural networks" ☆11 · Updated 2 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆68 · Updated 11 months ago
- Code for ML Doctor ☆85 · Updated 5 months ago
- [ICLR 2023, Best Paper Award at ECCV '22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆51 · Updated last month
- ☆14 · Updated 3 years ago
- Model Poisoning Attack to Federated Recommendation ☆32 · Updated 2 years ago
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024 ☆30 · Updated 5 months ago
- [AAAI 2023] Official PyTorch implementation for "Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings… ☆21 · Updated 2 years ago
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆45 · Updated last year
- This repository aims to provide links to works about privacy attacks and privacy preservation on graph data with Graph Neural Networks (G… ☆22 · Updated last year
- ☆13 · Updated 7 months ago
- ☆24 · Updated 3 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆47 · Updated 2 years ago
- Backdoor detection in federated learning with similarity measurement ☆22 · Updated 2 years ago
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models ☆97 · Updated 2 weeks ago