vasishtduddu / GraphLeaks
Code for the paper "Quantifying Privacy Leakage in Graph Embedding", published at MobiQuitous 2020
☆15 · Updated 3 years ago
Alternatives and similar repositories for GraphLeaks
Users interested in GraphLeaks are comparing it to the repositories listed below.
- [ICLR 2022] Understanding and Improving Graph Injection Attack by Promoting Unnoticeability ☆38 · Updated last year
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23) ☆49 · Updated last year
- Adaptive evaluation reveals that most examined adversarial defenses for GNNs show no or only marginal improvement in robustness. (NeurIPS… ☆29 · Updated 2 years ago
- ☆41 · Updated last year
- ☆22 · Updated 2 years ago
- DP-GNN design that makes both the model weights and the inference procedure differentially private (NeurIPS 2023) ☆12 · Updated last year
- Implementation of Adversarial Privacy Graph Embedding in TensorFlow ☆20 · Updated 5 years ago
- ☆25 · Updated 2 years ago
- Official code repository for the paper "Personalized Subgraph Federated Learning" (ICML 2023) ☆48 · Updated last year
- Adversarial Attack on Graph Neural Networks as an Influence Maximization Problem ☆18 · Updated 3 years ago
- Implementation of the paper "Certifiable Robustness and Robust Training for Graph Convolutional Networks" ☆43 · Updated 4 years ago
- Official implementation of the paper "Robustness of Graph Neural Networks at Scale" (NeurIPS 2021) ☆30 · Updated last year
- G-NIA model from "Single Node Injection Attack against Graph Neural Networks" (CIKM 2021) ☆30 · Updated 3 years ago
- Official PyTorch implementation of the IJCAI '21 paper "GraphMI: Extracting Private Graph Data from Graph Neural Networks" ☆13 · Updated 3 years ago
- Code for the CIKM 2021 paper "Differentially Private Federated Knowledge Graphs Embedding" (https://arxiv.org/abs/2105.07615) ☆33 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated 5 years ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆29 · Updated 2 years ago
- [AAAI 2023] Official PyTorch implementation of "Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings… ☆22 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation (WWW 2022) ☆17 · Updated 2 years ago
- Official implementation of the CVPR 2023 paper "Backdoor Defense via Deconfounded Representation Learning" ☆26 · Updated 2 years ago
- Code for the paper "Adversarial Label-Flipping Attack and Defense for Graph Neural Networks" (ICDM 2020) ☆18 · Updated 4 years ago
- ☆10 · Updated 4 years ago
- PyTorch implementation of the GNN meta attack (Mettack) from the paper "Adversarial Attacks on Graph Neural Networks via Meta Learning" ☆21 · Updated 4 years ago
- ☆27 · Updated 2 years ago
- Official repository for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆56 · Updated 2 years ago
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios ☆16 · Updated 3 years ago
- Open-source code for the paper "EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks" ☆27 · Updated 2 years ago
- ☆10 · Updated 3 years ago