A PyTorch implementation of "Backdoor Attacks to Graph Neural Networks" (SACMAT'21)
☆43, updated Sep 18, 2021
Alternatives and similar repositories for graphbackdoor
Users interested in graphbackdoor are comparing it to the repositories listed below.
- ☆57, updated Oct 5, 2022
- Official PyTorch implementation of "Unnoticeable Backdoor Attacks on Graph Neural Networks" (WWW 2023) (☆60, updated Dec 3, 2023)
- [ICLR 2022] Understanding and Improving Graph Injection Attack by Promoting Unnoticeability (☆38, updated Nov 27, 2023)
- Official repository for AAAI'23 paper "Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinf…" (☆30, updated Nov 26, 2022)
- Official PyTorch implementation of IJCAI'21 paper "GraphMI: Extracting Private Graph Data from Graph Neural Networks" (☆13, updated Nov 19, 2021)
- [TMLR] Unsupervised Network Embedding Beyond Homophily (https://arxiv.org/abs/2203.10866) (☆11, updated Mar 21, 2023)
- Code for "Towards More Practical Adversarial Attacks on Graph Neural Networks" (NeurIPS 2020) (☆28, updated Nov 13, 2021)
- Official implementation of "Robustness of Graph Neural Networks at Scale" (NeurIPS 2021) (☆31, updated Jul 25, 2023)
- Code for "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification" (IEEE S&P 2024) (☆34, updated Aug 10, 2024)
- Code for "Dynamic Backdoor Attacks Against Machine Learning Models" (☆16, updated Nov 20, 2023)
- Repository for AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning" (☆41, updated Apr 3, 2023)
- Code for the CVPR 2020 paper "Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization" (☆13, updated Jul 13, 2020)
- Core code for "Beyond Traditional Threats: A Persistent Backdoor Attack on Federated Learning" (☆21, updated Dec 25, 2023)
- Implementation demo of the IJCAI 2022 paper "Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation …" (☆21, updated Nov 9, 2024)
- Implementation of "Transferring Robustness for Graph Neural Network Against Poisoning Attacks" (☆20, updated Feb 26, 2020)
- Official implementation of Half-Hop (☆20, updated Oct 10, 2023)
- Code for "TDGIA: Effective Injection Attacks on Graph Neural Networks" (KDD 2021, research track) (☆22, updated Nov 5, 2021)
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) (☆23, updated Mar 23, 2024)
- Graph injection adversarial attack & defense dataset, extracted from the KDD CUP 2020 ML2 Track (☆22, updated Aug 19, 2024)
- Codebase for the ICKG 2023 paper "GLAD: Content-aware Dynamic Graphs For Log Anomaly Detection" (☆22, updated Feb 16, 2024)
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning (☆24, updated Jun 14, 2024)
- ☆22, updated Nov 4, 2017
- Official implementation of AAAI'22 paper "ProtGNN: Towards Self-Explaining Graph Neural Networks" (☆50, updated Oct 25, 2022)
- Source code for "Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data" (☆20, updated Feb 24, 2024)
- Implementation of "More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks" (☆24, updated May 3, 2023)
- Official implementation of CVPR'23 paper "Backdoor Defense via Deconfounded Representation Learning" (☆25, updated Mar 13, 2023)
- Multi-metrics adaptively identifies backdoors in federated learning (☆37, updated Aug 7, 2025)
- ☆23, updated Aug 24, 2020
- ☆10, updated Nov 22, 2022
- Official implementation of "Black-box Dataset Ownership Verification via Backdoor Watermarking" (☆26, updated Jul 22, 2023)
- Adversarial attacks and defenses on graph neural networks (☆391, updated Feb 22, 2024)
- Implementation of "Adversarial Attacks on Graph Neural Networks via Meta Learning" (☆154, updated Dec 9, 2021)
- Code for "Poisoned classifiers are not only backdoored, they are fundamentally broken" (☆26, updated Jan 7, 2022)
- ☆24, updated Aug 9, 2022
- Data and code for "DECOR: Degree-Corrected Social Graph Refinement for Fake News Detection" (KDD 2023) (☆25, updated Aug 4, 2023)
- Implementation of "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) (☆32, updated Jul 11, 2022)
- A PyTorch adversarial library for attack and defense methods on images and graphs (☆1,079, updated Jun 26, 2025)
- PyTorch implementation of "Backdoor Attack against Speaker Verification" (☆28, updated Sep 19, 2023)
- Codes for Dual Stealthy Backdoor (☆14, updated Feb 10, 2024)