vasishtduddu / GraphLeaks
Code for the paper "Quantifying Privacy Leakage in Graph Embedding" published in MobiQuitous 2020
☆15 · Updated 3 years ago
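The paper behind this repository quantifies how much private graph structure an adversary can recover from published node embeddings. As a purely illustrative sketch (not the repository's actual code), one common reconstruction-style attack predicts an edge between two nodes whenever their embeddings are unusually similar; the `infer_edges` helper and the `threshold` value below are hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def infer_edges(embeddings, threshold=0.8):
    """Toy graph-reconstruction attack: guess an edge (i, j)
    whenever two node embeddings are close in cosine similarity."""
    pred = set()
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(embeddings[i], embeddings[j]) >= threshold:
                pred.add((i, j))
    return pred

# Toy embeddings: nodes 0 and 1 nearly collinear, node 2 orthogonal.
emb = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(infer_edges(emb))  # only the (0, 1) pair exceeds the threshold
```

The actual attacks studied in the paper (e.g. membership and attribute inference) are more involved, but they share this core idea: embedding geometry leaks relational information that was never explicitly published.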
Alternatives and similar repositories for GraphLeaks:
Users interested in GraphLeaks are comparing it to the repositories listed below.
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23) ☆47 · Updated last year
- ☆39 · Updated last year
- [ICLR 2022] Understanding and Improving Graph Injection Attack by Promoting Unnoticeability ☆37 · Updated last year
- ☆22 · Updated 2 years ago
- Implementation of Adversarial Privacy Graph Embedding in TensorFlow ☆19 · Updated 4 years ago
- G-NIA model from "Single Node Injection Attack against Graph Neural Networks" (CIKM 2021) ☆28 · Updated 3 years ago
- Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation (WWW 2022) ☆17 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- The code for paper "Cross Platforms Linguals and Models Social Bot Detection via Federated Adversarial Contrastive Knowledge Distillation… ☆18 · Updated last year
- Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem ☆18 · Updated 3 years ago
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆45 · Updated last year
- Official PyTorch implementation of IJCAI'21 paper "GraphMI: Extracting Private Graph Data from Graph Neural Networks" ☆13 · Updated 3 years ago
- [AAAI 2023] Official PyTorch implementation for "Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings… ☆21 · Updated 2 years ago
- Adaptive evaluation reveals that most examined adversarial defenses for GNNs show no or only marginal improvement in robustness. (NeurIPS… ☆29 · Updated 2 years ago
- The code of paper "Adversarial Label-Flipping Attack and Defense for Graph Neural Networks" (ICDM 2020) ☆17 · Updated 4 years ago
- Code for CIKM 2021 paper: Differentially Private Federated Knowledge Graphs Embedding (https://arxiv.org/abs/2105.07615) ☆31 · Updated 2 years ago
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios ☆17 · Updated 2 years ago
- Official code repository for the paper "Personalized Subgraph Federated Learning" (ICML 2023) ☆47 · Updated last year
- [WWW 2022] Geometric Graph Representation Learning via Maximizing Rate Reduction ☆26 · Updated 2 years ago
- Implementation of the paper "Transferring Robustness for Graph Neural Network Against Poisoning Attacks" ☆20 · Updated 5 years ago
- Code for ICLR 2021 paper: On Dyadic Fairness: Exploring and Mitigating Bias in Graph Connections ☆12 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- [IEEE S&P 22] "LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis" by Fan Wu, Yunhui Long, Ce Zhang, … ☆22 · Updated 3 years ago
- Official implementation of the paper "Robustness of Graph Neural Networks at Scale" (NeurIPS 2021) ☆30 · Updated last year
- Defending graph neural networks against adversarial attacks (NeurIPS 2020) ☆63 · Updated last year
- DP-GNN design that makes both the model weights and the inference procedure differentially private (NeurIPS 2023) ☆12 · Updated last year
- PyTorch implementation of the GNN meta attack (Mettack). Paper: "Adversarial Attacks on Graph Neural Networks via Meta Learning" ☆21 · Updated 4 years ago
- Official implementation of "Graph Unlearning" (ACM CCS 2022) ☆45 · Updated 2 years ago
- This repository provides links to works on privacy attacks and privacy preservation for graph data with Graph Neural Networks (G… ☆23 · Updated last year
- ☆27 · Updated 2 years ago