cleverhans-lab / verifiable-unlearning
☆16 · Updated last year
Alternatives and similar repositories for verifiable-unlearning:
Users who are interested in verifiable-unlearning are comparing it to the repositories listed below.
- Implementation of calibration bounds for differential privacy in the shuffle model ☆23 · Updated 4 years ago
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (IEEE S&P/Oakland 2019) ☆53 · Updated 5 years ago
- Learning from History for Byzantine Robustness ☆22 · Updated 3 years ago
- Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent (ICLR 2021); a robust-aggregation sketch follows this list ☆20 · Updated 3 years ago
- An implementation of the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019) ☆26 · Updated last year
- A simple Python implementation of a secure aggregation protocol for federated learning; a minimal sketch of the pairwise-masking idea follows this list. ☆34 · Updated last year
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) ☆72 · Updated 3 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 3 years ago
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning ☆20 · Updated 7 months ago
- A Sybil-resilient distributed learning protocol. ☆99 · Updated last year
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency ☆12 · Updated last year
- Code for the CCS'22 paper "Federated Boosted Decision Trees with Differential Privacy" ☆43 · Updated last year
- VeriFL: Communication-Efficient and Fast Verifiable Aggregation for Federated Learning (IEEE TIFS 2020) ☆23 · Updated 2 years ago
- PyTorch implementation of backdoor unlearning. ☆17 · Updated 2 years ago
- Code revealing the vulnerabilities of SplitNN (split learning). ☆30 · Updated 2 years ago
- A secure aggregation system for private federated learning ☆38 · Updated 8 months ago
- Code & supplementary material for the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022). ☆82 · Updated last year
- Code for "Neurotoxin: Durable Backdoors in Federated Learning" (ICML 2022), https://arxiv.org/abs/2206.10341 ☆67 · Updated last year
- Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization ☆9 · Updated 3 years ago
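Several entries above revolve around secure aggregation (the simple Python implementation, VeriFL, and the aggregation-service realization). To make the core idea concrete, here is a minimal, hypothetical NumPy sketch of pairwise masking: each pair of clients shares a seed, one adds the derived mask and the other subtracts it, so the server learns only the sum of updates. This is an illustration, not the protocol of any repository listed here; a real protocol derives the seeds via key agreement and handles client dropouts, both omitted below, and the names (`masked_update`, `pairwise_seeds`) are made up for the example.

```python
import numpy as np

def masked_update(client_id, update, pairwise_seeds):
    """Mask one client's model update with pairwise random masks.

    pairwise_seeds maps each other client's id to a seed shared by that
    pair (assumed pre-distributed here; a real protocol would derive it
    via Diffie-Hellman key agreement). The client with the smaller id
    adds the mask and the other subtracts it, so every mask cancels in
    the server-side sum.
    """
    masked = update.astype(np.float64)
    for other_id, seed in pairwise_seeds.items():
        mask = np.random.default_rng(seed).standard_normal(update.shape)
        masked += mask if client_id < other_id else -mask
    return masked

# Toy run: three clients, one shared seed per pair.
updates = {0: np.array([1.0, 2.0]), 1: np.array([0.5, -1.0]), 2: np.array([2.0, 0.0])}
seeds = {(0, 1): 42, (0, 2): 7, (1, 2): 99}
masked = [
    masked_update(i, updates[i],
                  {j: seeds[tuple(sorted((i, j)))] for j in updates if j != i})
    for i in updates
]
# The server sees only masked vectors, yet their sum equals the true sum.
assert np.allclose(sum(masked), sum(updates.values()))
```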
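Likewise, the Byzantine-robustness entries (Learning from History, Distributed Momentum, A Little Is Enough) study aggregation when some clients may send arbitrary updates. A standard robust baseline in that literature replaces the server's mean with a coordinate-wise median; the sketch below illustrates only that baseline, not the specific method of any listed paper.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by the per-coordinate median.

    Unlike the mean, the median is insensitive to a minority of
    Byzantine clients submitting arbitrarily large updates.
    """
    return np.median(np.stack(updates), axis=0)

honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
byzantine = [np.array([1e6, -1e6])]   # one attacker sends garbage
aggregate = coordinate_wise_median(honest + byzantine)
# The attacker barely shifts the result: both coordinates stay near 1.0,
# whereas a plain mean would be dragged to roughly +/-250000.
assert np.all(np.abs(aggregate - 1.0) < 0.2)
```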