xinleihe / link_stealing_attack
☆14 · Updated 3 years ago
Alternatives and similar repositories for link_stealing_attack:
Users interested in link_stealing_attack are comparing it to the repositories listed below.
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆45 · Updated last year
- [IEEE S&P '22] "LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis" by Fan Wu, Yunhui Long, Ce Zhang, … ☆22 · Updated 3 years ago
- Implementation of the paper "More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks" ☆22 · Updated last year
- Implementation of "PrivGraph: Differentially Private Graph Data Publication by Exploiting Community Information" ☆12 · Updated last year
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23) ☆45 · Updated last year
- Implementations of differentially private release mechanisms for graph statistics ☆21 · Updated 2 years ago
- This repository aims to provide links to works about privacy attacks and privacy preservation on graph data with Graph Neural Networks (G… ☆23 · Updated last year
- A PyTorch implementation of "Backdoor Attacks to Graph Neural Networks" (SACMAT '21) ☆37 · Updated 3 years ago
- Implementation of Adversarial Privacy Graph Embedding in TensorFlow ☆19 · Updated 4 years ago
- An official PyTorch implementation of "Unnoticeable Backdoor Attacks on Graph Neural Networks" (WWW 2023) ☆57 · Updated last year
- Official implementation of "Graph Unlearning" (ACM CCS 2022) ☆43 · Updated last year
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) ☆17 · Updated 4 months ago
- Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness (IJCAI '19) ☆13 · Updated 3 years ago
- Membership inference, attribute inference, and model inversion attacks implemented in PyTorch ☆58 · Updated 4 months ago
- An implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) ☆26 · Updated last year
- A list of papers on Federated Learning, especially malicious clients and attacks ☆12 · Updated 4 years ago
- Secure and utility-aware data collection with condensed local differential privacy ☆16 · Updated 4 years ago
- Python package to create adversarial agents for membership inference attacks against machine learning models ☆46 · Updated 6 years ago