inspire-group / unlearning-verification
Verifying machine unlearning by backdooring
☆20 · Mar 25, 2023 · Updated 2 years ago
Alternatives and similar repositories for unlearning-verification
Users interested in unlearning-verification are comparing it to the libraries listed below.
- RAB: Provable Robustness Against Backdoor Attacks · ☆39 · Oct 3, 2023 · Updated 2 years ago
- Attacks using out-of-distribution adversarial examples · ☆11 · Nov 19, 2019 · Updated 6 years ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … · ☆12 · Sep 6, 2023 · Updated 2 years ago
- Self-teaching notes on gradient leakage attacks against GPT-2 models · ☆14 · Mar 18, 2024 · Updated last year
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models · ☆133 · Apr 9, 2024 · Updated last year
- Membership Inference Attack on Federated Learning · ☆12 · Jan 14, 2022 · Updated 4 years ago
- ☆45 · Nov 10, 2019 · Updated 6 years ago
- [NeurIPS 2022] JAX/Haiku implementation of "On Privacy and Personalization in Cross-Silo Federated Learning" · ☆27 · Apr 16, 2023 · Updated 2 years ago
- Chain-PPFL: A Privacy-Preserving Federated Learning Framework based on Chained SMC · ☆37 · Jul 16, 2020 · Updated 5 years ago
- ☆14 · Dec 8, 2022 · Updated 3 years ago
- Our submission for the Microsoft Membership Inference Competition at SaTML 2023 · ☆15 · Apr 5, 2023 · Updated 2 years ago
- Modular evaluation metrics and a benchmark for large-scale federated learning · ☆13 · Jul 25, 2024 · Updated last year
- [NDSS 2025] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling · ☆16 · Jan 18, 2025 · Updated last year
- Gradient-Leakage Resilient Federated Learning · ☆14 · Jul 25, 2022 · Updated 3 years ago
- Source code for the ECML-PKDD (2020) paper "FedMAX: Mitigating Activation Divergence for Accurate and Communication-Efficient Federated Learn…" · ☆16 · Dec 27, 2022 · Updated 3 years ago
- Code for the paper "Byzantine-Resilient Distributed Finite-Sum Optimization over Networks" · ☆18 · Nov 5, 2020 · Updated 5 years ago
- [USENIX Security '24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning · ☆21 · Apr 28, 2025 · Updated 9 months ago
- Certified Removal from Machine Learning Models · ☆69 · Aug 23, 2021 · Updated 4 years ago
- ☆21 · Mar 20, 2023 · Updated 2 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) · ☆17 · Nov 11, 2020 · Updated 5 years ago
- The code for our Updates-Leak paper · ☆17 · Jul 23, 2020 · Updated 5 years ago
- PyTorch implementation of backdoor unlearning · ☆21 · Jun 8, 2022 · Updated 3 years ago
- Code implementation for "Traceback of Data Poisoning Attacks in Neural Networks" · ☆20 · Aug 15, 2022 · Updated 3 years ago
- The HyFed framework provides an easy-to-use API for developing federated, privacy-preserving machine learning algorithms · ☆18 · Sep 10, 2025 · Updated 5 months ago
- Implementation for "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder" (EMNLP-Findings 2020) · ☆15 · Oct 8, 2020 · Updated 5 years ago
- ☆47 · Oct 19, 2021 · Updated 4 years ago
- The official code of the KDD22 paper "FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clien…" · ☆85 · Feb 23, 2023 · Updated 2 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) · ☆50 · May 20, 2022 · Updated 3 years ago
- Research prototype of deletion-efficient k-means algorithms · ☆24 · Dec 19, 2019 · Updated 6 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) · ☆48 · Aug 18, 2022 · Updated 3 years ago
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) · ☆24 · Dec 9, 2021 · Updated 4 years ago
- ☆21 · May 11, 2022 · Updated 3 years ago
- Robust aggregation for federated learning with the RFA algorithm · ☆53 · Sep 13, 2022 · Updated 3 years ago
- An implementation of the paper "A Hybrid Approach to Privacy Preserving Federated Learning" (https://arxiv.org/pdf/1812.03224.pdf) · ☆24 · Jun 9, 2020 · Updated 5 years ago
- Implementation of calibration bounds for differential privacy in the shuffle model · ☆21 · Nov 10, 2020 · Updated 5 years ago
- ☆29 · May 8, 2023 · Updated 2 years ago
- Federated learning and membership inference attack experiments on CIFAR10 · ☆23 · Jan 29, 2020 · Updated 6 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" · ☆85 · Nov 22, 2021 · Updated 4 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) · ☆152 · Oct 3, 2022 · Updated 3 years ago