ssg-research / WAFFLE
WAFFLE: Watermarking in Federated Learning
☆15 · Updated last year
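The rough idea behind server-side watermarking in federated learning is that the aggregator, after averaging client updates each round, briefly retrains the global model on a small trigger set that only it knows, so ownership can later be demonstrated from the model's behavior on those triggers. The sketch below is a minimal illustration under that assumption; the FedAvg helper, trigger-set loader, and training loop are illustrative placeholders, not this repository's code.

```python
import torch
import torch.nn.functional as F

def fedavg(client_states):
    """Plain FedAvg: average the clients' parameter tensors key by key."""
    return {k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
            for k in client_states[0]}

def embed_watermark(model, trigger_loader, epochs=5, lr=0.01):
    """Server-side step: retrain the aggregated model on a small trigger set
    (known only to the server) so it memorizes the watermark labels."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in trigger_loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model

# Per communication round (illustrative usage):
#   global_state = fedavg(client_states)            # aggregate client updates
#   global_model.load_state_dict(global_state)
#   embed_watermark(global_model, trigger_loader)   # re-embed the watermark
```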
Related projects:
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) · ☆53 · Updated 5 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) · ☆69 · Updated 3 years ago
- Webank AI · ☆36 · Updated last year
- Watermarking against model extraction attacks in MLaaS (ACM MM 2021) · ☆32 · Updated 3 years ago
- Privacy attacks on Split Learning · ☆37 · Updated 2 years ago
- Simple differential privacy in PyTorch · ☆48 · Updated 4 years ago (a minimal gradient-perturbation sketch appears after this list)
- DETOX: A Redundancy-based Framework for Faster and More Robust Gradient Aggregation · ☆16 · Updated 4 years ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients · ☆9 · Updated last year
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" · ☆38 · Updated 5 years ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch · ☆54 · Updated last year
- Code to accompany the paper "Deep Learning with Gaussian Differential Privacy" · ☆31 · Updated 3 years ago
- Privacy-preserving deep learning · ☆15 · Updated 7 years ago
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" · ☆29 · Updated 2 years ago
- A simple backdoor model for federated learning. We use MNIST as the original data set for the data attack and we use the CIFAR-10 data set… · ☆13 · Updated 4 years ago
- PyTorch implementation of the paper "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data" (https://arxiv.org/abs/16… · ☆41 · Updated 2 years ago
- Code for ML Doctor · ☆84 · Updated last month
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆44 · Updated 4 years ago
- Code for the TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models" · ☆31 · Updated 2 years ago
- Official code repository for our accepted work "Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning" in NeurI… · ☆21 · Updated 11 months ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" · ☆52 · Updated last year
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) · ☆61 · Updated last year
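For the "Simple differential privacy in PyTorch" entry above, the following is a minimal sketch of gradient perturbation. The clipping norm, noise multiplier, and batch-level (rather than per-example) clipping are illustrative simplifications, not that repository's implementation and not a calibrated (epsilon, delta) guarantee.

```python
import torch

def dp_sgd_step(model, loss, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One SGD step with gradient clipping and Gaussian noise.
    NOTE: real DP-SGD clips per-example gradients; clipping the batch
    gradient here is a simplification for brevity."""
    model.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                # Noise scale follows the clip bound, as in the Gaussian mechanism.
                p.grad += torch.randn_like(p.grad) * noise_multiplier * clip_norm
                p -= lr * p.grad
```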