cleverhans-lab / verifiable-unlearning
☆15, updated last year
Related projects
Alternatives and complementary repositories for verifiable-unlearning
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) (☆53, updated 5 years ago)
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… (☆37, updated 2 years ago)
- Implementation of calibration bounds for differential privacy in the shuffle model (☆23, updated 4 years ago)
- Code for ML-Doctor (☆86, updated 2 months ago)
- Learning from History for Byzantine Robustness (☆21, updated 3 years ago)
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency (☆12, updated last year)
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) (☆70, updated 3 years ago)
- Membership inference, attribute inference, and model inversion attacks implemented in PyTorch (☆57, updated last month)
- Privacy attacks on Split Learning (☆37, updated 2 years ago)
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning (☆16, updated 4 months ago)
- Membership inference against federated learning (☆8, updated 3 years ago)
- Reveals vulnerabilities of SplitNN (☆30, updated 2 years ago)
- Verifying machine unlearning by backdooring (☆18, updated last year)
- Federated learning and membership inference attack experiments on CIFAR-10 (☆19, updated 4 years ago)
- An implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) (☆26, updated last year)
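Several of the repositories above center on membership inference attacks. As a rough, self-contained illustration of the common confidence-thresholding baseline behind many of them (all data and parameters below are synthetic assumptions, not drawn from any listed repository):

```python
# Minimal sketch of a confidence-threshold membership inference attack.
# The "model" is faked: members (training points) are assumed to receive
# higher predicted confidence than non-members -- a synthetic stand-in
# for the overfitting signal real attacks exploit.
import numpy as np

rng = np.random.default_rng(0)

def confidences(n, member):
    # Toy confidence distribution; 0.9 vs 0.6 means are illustrative only.
    base = 0.9 if member else 0.6
    return np.clip(rng.normal(base, 0.05, n), 0.0, 1.0)

member_conf = confidences(1000, member=True)
nonmember_conf = confidences(1000, member=False)

threshold = 0.75  # attacker predicts "member" above this confidence
tpr = np.mean(member_conf > threshold)      # true positive rate
fpr = np.mean(nonmember_conf > threshold)   # false positive rate
advantage = tpr - fpr                       # standard membership advantage
print(f"TPR={tpr:.2f} FPR={fpr:.2f} advantage={advantage:.2f}")
```

A large advantage (TPR far above FPR) indicates the model leaks membership; real attacks calibrate the threshold per-example or train a shadow-model classifier instead of using a single global cutoff.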