TrustworthyGNN / MIA-GNN
Membership Inference Attack against Graph Neural Networks
☆12 · Updated 2 years ago
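For readers new to the topic, the sketch below illustrates the general idea behind a membership inference attack on a GNN node classifier: the attacker exploits the gap in prediction confidence between training (member) nodes and unseen (non-member) nodes. This is an illustrative example only, not the code of this repository or of any project listed below; the two-layer GCN, the Cora dataset, and the 0.9 confidence threshold are assumptions chosen for demonstration, and real attacks typically calibrate the threshold or train an attack model on shadow models instead.

```python
# Minimal, illustrative sketch of a confidence-thresholding membership
# inference attack against a GNN node classifier (NOT this repository's
# implementation). Assumes torch and torch_geometric are installed.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)


dataset = Planetoid(root="data/Cora", name="Cora")  # assumed example dataset
data = dataset[0]
model = GCN(dataset.num_features, 16, dataset.num_classes)
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

# Train the target model on the "member" nodes (train_mask).
model.train()
for _ in range(200):
    opt.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    opt.step()

# Attack: member nodes tend to receive higher prediction confidence than
# non-members, so threshold the max softmax probability to guess membership.
model.eval()
with torch.no_grad():
    conf = F.softmax(model(data.x, data.edge_index), dim=1).max(dim=1).values

threshold = 0.9  # assumed; in practice calibrated, e.g. via shadow models
member_guess = conf > threshold
tpr = member_guess[data.train_mask].float().mean().item()  # members caught
fpr = member_guess[data.test_mask].float().mean().item()   # non-members flagged
print(f"attack TPR={tpr:.2f}  FPR={fpr:.2f}")
```

A large gap between the reported TPR and FPR indicates the target model leaks membership information; the repositories listed below explore stronger attacks (shadow models, label-only attacks) and corresponding defenses.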
Alternatives and similar repositories for MIA-GNN
Users interested in MIA-GNN are comparing it to the repositories listed below
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆16 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- ☆56 · Updated 2 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆48 · Updated 3 years ago
- ☆28 · Updated 2 years ago
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆47 · Updated last year
- [IEEE S&P 22] "LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis" by Fan Wu, Yunhui Long, Ce Zhang, … ☆23 · Updated 3 years ago
- An official PyTorch implementation of "Unnoticeable Backdoor Attacks on Graph Neural Networks" (WWW 2023) ☆58 · Updated last year
- ☆11 · Updated 8 months ago
- The code for our Updates-Leak paper ☆16 · Updated 4 years ago
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆52 · Updated 3 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 2 years ago
- Code for ML Doctor ☆89 · Updated 9 months ago
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) ☆48 · Updated 3 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆85 · Updated 3 years ago
- ☆14 · Updated 4 years ago
- Implementation of the paper "More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks" ☆23 · Updated 2 years ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch ☆62 · Updated 8 months ago
- Official repository for the Data-Free Model Extraction paper (CVPR 2021): https://arxiv.org/abs/2011.14779 ☆72 · Updated last year
- ☆69 · Updated 3 years ago
- ☆17 · Updated 3 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆65 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆73 · Updated 2 years ago
- ☆31 · Updated 9 months ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆60 · Updated 6 years ago
- ☆25 · Updated 3 years ago
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions" (NeurIPS 2023) ☆16 · Updated last year
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆56 · Updated 2 years ago