zhxchd / Blink_GNN
Code for CCS '23 paper "Blink: Link Local Differential Privacy in Graph Neural Networks via Bayesian Estimation"
☆10 · Updated last year
Alternatives and similar repositories for Blink_GNN:
- PyTorch implementation of a number of mechanisms in local differential privacy ☆15 · Updated 3 years ago
- Locally Private Graph Neural Networks (ACM CCS 2021) ☆45 · Updated last year
- Local Differential Privacy for Federated Learning ☆16 · Updated 2 years ago
- A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning" ☆62 · Updated 4 years ago
- Implementations of differentially private release mechanisms for graph statistics ☆21 · Updated 2 years ago
- Implementation of "PrivGraph: Differentially Private Graph Data Publication by Exploiting Community Information" ☆12 · Updated last year
- ☆14 · Updated 3 years ago
- ☆18 · Updated last year
- Chinese translation of "The Algorithmic Foundations of Differential Privacy" by Cynthia Dwork ☆161 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- ☆13 · Updated 8 months ago
- ☆14 · Updated this week
- ☆38 · Updated 3 years ago
- Implementation of the paper "More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks" ☆21 · Updated last year
- ☆25 · Updated last year
- GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation (USENIX Security '23) ☆45 · Updated last year
- Code & supplementary material of the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022) ☆82 · Updated last year
- Concentrated Differentially Private Gradient Descent with Adaptive per-iteration Privacy Budget ☆49 · Updated 6 years ago
- ☆36 · Updated 3 years ago
- A list of papers using/about Federated Learning, especially malicious clients and attacks ☆12 · Updated 4 years ago
- GitHub repo for the AAAI 2023 paper "On the Vulnerability of Backdoor Defenses for Federated Learning" ☆35 · Updated last year
- Implementation of calibration bounds for differential privacy in the shuffle model ☆23 · Updated 4 years ago
- Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness (IJCAI '19) ☆13 · Updated 3 years ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien…" ☆74 · Updated last year
- ☆47 · Updated last year
- [AAAI '23] Federated Learning on Non-IID Graphs via Structural Knowledge Sharing ☆61 · Updated 2 years ago
- Code for the paper "Robust Federated Learning with Attack-Adaptive Aggregation", accepted at FTL-IJCAI '21 ☆44 · Updated last year
- ☆37 · Updated last year
- Official implementation of "FL-WBC: Enhancing Robustness against Model Poisoning Attacks in Federated Learning from a Client Perspective"… ☆39 · Updated 3 years ago
- ☆10 · Updated 2 months ago