zhxchd / Blink_GNN
Code for CCS '23 paper "Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation"
☆14 · Updated last year
Alternatives and similar repositories for Blink_GNN
Users interested in Blink_GNN are comparing it to the repositories listed below
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆47 · Updated last year
- PyTorch implementation of a number of mechanisms in local differential privacy ☆17 · Updated 3 years ago
- Local Differential Privacy for Federated Learning ☆16 · Updated 2 years ago
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23) ☆49 · Updated last year
- Implementations of differentially private release mechanisms for graph statistics ☆24 · Updated 3 years ago
- ☆14 · Updated last year
- Implementation of the paper "More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks" ☆23 · Updated 2 years ago
- Code for the CCS '22 paper "Federated Boosted Decision Trees with Differential Privacy" ☆46 · Updated last year
- ☆35 · Updated 3 years ago
- Implementation of "PrivGraph: Differentially Private Graph Data Publication by Exploiting Community Information" ☆13 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget ☆49 · Updated 7 years ago
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) ☆19 · Updated 8 months ago
- ☆11 · Updated 9 months ago
- [IEEE S&P '22] "LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis" by Fan Wu, Yunhui Long, Ce Zhang, … ☆23 · Updated 3 years ago
- Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness (IJCAI '19) ☆13 · Updated 4 years ago
- Membership Inference, Attribute Inference, and Model Inversion attacks implemented using PyTorch ☆62 · Updated 8 months ago
- ☆38 · Updated 4 years ago
- ☆10 · Updated 3 years ago
- ☆14 · Updated 4 years ago
- ☆23 · Updated last year
- Implementation of calibration bounds for differential privacy in the shuffle model ☆22 · Updated 4 years ago
- Official code for "FedRule: Federated Rule Recommendation System with Graph Neural Networks" ☆13 · Updated last year
- Code for the attack scheme in the paper "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" ☆19 · Updated last year
- ☆30 · Updated last year
- Official implementation of "Lurking in the Shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning" ☆9 · Updated 4 months ago
- Implementation code for the paper "A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning" ☆11 · Updated last year
- ☆41 · Updated last year
- ☆18 · Updated 4 years ago
- A simple backdoor model for federated learning. Uses MNIST as the original dataset for the data attack and the CIFAR-10 dataset… ☆14 · Updated 5 years ago