ssg-research / WAFFLE
WAFFLE: Watermarking in Federated Learning
☆19 · Updated last year
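For context, WAFFLE embeds the watermark at the aggregator: after each federated averaging round, the server briefly retrains the global model on a secret trigger set before redistributing it. Below is a minimal PyTorch sketch of that server-side loop; `fedavg`, `embed_watermark`, `trigger_loader`, and all hyperparameters are illustrative names and values, not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def fedavg(client_states):
    """Plain FedAvg: parameter-wise mean of the clients' state dicts."""
    return {k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
            for k in client_states[0]}

def embed_watermark(model, trigger_loader, device="cpu",
                    max_epochs=25, target_acc=0.98):
    """Retrain the aggregated global model on a server-held trigger set
    until the watermark labels are predicted reliably (knobs illustrative)."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    model.to(device).train()
    for _ in range(max_epochs):
        correct = total = 0
        for x, y in trigger_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            logits = model(x)
            F.cross_entropy(logits, y).backward()
            opt.step()
            correct += (logits.argmax(dim=1) == y).sum().item()
            total += y.numel()
        if correct / total >= target_acc:  # watermark (re-)embedded; stop early
            break
    return model

# Server side of one round (clients assumed to return trained state dicts):
#   global_state = fedavg([client_update(global_state) for client in clients])
#   global_model.load_state_dict(global_state)
#   embed_watermark(global_model, trigger_loader)
```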
Alternatives and similar repositories for WAFFLE:
Users interested in WAFFLE are comparing it to the repositories listed below.
- Webank AI ☆42 · Updated last month
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) ☆53 · Updated 5 years ago
- ☆45 · Updated 4 years ago
- Watermarking against model extraction attacks in MLaaS (ACM MM 2021) ☆33 · Updated 3 years ago
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆71 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- ☆31 · Updated 4 years ago
- Code for ML Doctor ☆88 · Updated 7 months ago
- ☆54 · Updated 2 years ago
- ☆45 · Updated 5 years ago
- ☆21 · Updated 3 years ago
- Privacy-preserving deep learning ☆15 · Updated 7 years ago
- [ICLR 2023, Best Paper Award at ECCV’22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆53 · Updated 3 months ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆55 · Updated last year
- ☆28 · Updated 2 years ago
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Counte…" ☆83 · Updated 2 years ago
- Membership inference, attribute inference, and model inversion attacks implemented in PyTorch ☆58 · Updated 5 months ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) ☆70 · Updated 2 years ago
- ☆69 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Code for the attack scheme described in "Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning" ☆17 · Updated last year
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆30 · Updated 2 years ago
- ☆35 · Updated 3 years ago
- ☆33 · Updated last year
- Privacy attacks on Split Learning ☆38 · Updated 3 years ago
- Multi-metrics adaptively identifies backdoors in Federated learning ☆24 · Updated last year
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- ☆26 · Updated 6 years ago
- Privacy-Preserving Deep Learning via Additively Homomorphic Encryption ☆68 · Updated 4 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 3 years ago