vasishtduddu / GraphLeaks
Code for the paper "Quantifying Privacy Leakage in Graph Embedding" published in MobiQuitous 2020
☆15 · Updated 3 years ago
Alternatives and similar repositories for GraphLeaks
Users interested in GraphLeaks are comparing it to the repositories listed below
- [ICLR 2022] Understanding and Improving Graph Injection Attack by Promoting Unnoticeability ☆38 · Updated last year
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23) ☆48 · Updated last year
- ☆41 · Updated last year
- The code of the paper "Adversarial Label-Flipping Attack and Defense for Graph Neural Networks" (ICDM 2020) ☆18 · Updated 4 years ago
- ☆25 · Updated 2 years ago
- Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation (WWW 2022) ☆17 · Updated 2 years ago
- G-NIA model from "Single Node Injection Attack against Graph Neural Networks" (CIKM 2021) ☆30 · Updated 3 years ago
- Official PyTorch implementation of the IJCAI '21 paper "GraphMI: Extracting Private Graph Data from Graph Neural Networks" ☆13 · Updated 3 years ago
- Adaptive evaluation reveals that most examined adversarial defenses for GNNs show no or only marginal improvement in robustness. (NeurIPS… ☆29 · Updated 2 years ago
- Adversarial Attack on Graph Neural Networks as an Influence Maximization Problem ☆18 · Updated 3 years ago
- Implementation of Adversarial Privacy Graph Embedding in TensorFlow ☆20 · Updated 4 years ago
- ☆32 · Updated 3 years ago
- This repository contains the official implementation of the paper "Robustness of Graph Neural Networks at Scale" (NeurIPS 2021). ☆30 · Updated last year
- ☆22 · Updated 2 years ago
- DP-GNN design that ensures both the model weights and the inference procedure are differentially private (NeurIPS 2023) ☆12 · Updated last year
- The code for the paper "Cross-Platform, Cross-Lingual and Cross-Model Social Bot Detection via Federated Adversarial Contrastive Knowledge Distillation… ☆19 · Updated 2 years ago
- [IEEE S&P '22] "LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis" by Fan Wu, Yunhui Long, Ce Zhang, … ☆23 · Updated 3 years ago
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios ☆16 · Updated 3 years ago
- Defending graph neural networks against adversarial attacks (NeurIPS 2020) ☆64 · Updated last year
- Code for CIKM 2021 paper: Differentially Private Federated Knowledge Graphs Embedding (https://arxiv.org/abs/2105.07615) ☆33 · Updated 2 years ago
- Official code repository for the paper "Personalized Subgraph Federated Learning" (ICML 2023) ☆48 · Updated last year
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆47 · Updated last year
- A PyTorch implementation of "Backdoor Attacks to Graph Neural Networks" (SACMAT '21) ☆39 · Updated 3 years ago
- [AAAI 2023] Official PyTorch implementation for "Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings… ☆22 · Updated 2 years ago
- PyTorch implementation of the GNN meta attack (mettack). Paper title: "Adversarial Attacks on Graph Neural Networks via Meta Learning." ☆21 · Updated 4 years ago
- Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization (NeurIPS '21) ☆23 · Updated 3 years ago
- ☆17 · Updated 3 years ago
- ☆56 · Updated 2 years ago
- Model Poisoning Attack to Federated Recommendation ☆31 · Updated 3 years ago
- [WWW 2022] Geometric Graph Representation Learning via Maximizing Rate Reduction ☆26 · Updated 3 years ago