Verifying machine unlearning by backdooring
☆20, updated Mar 25, 2023
Alternatives and similar repositories for unlearning-verification
Users interested in unlearning-verification are comparing it to the repositories listed below.
- Attacks using out-of-distribution adversarial examples (☆11, updated Nov 19, 2019)
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … (☆13, updated Sep 6, 2023)
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models (☆133, updated Apr 9, 2024)
- RAB: Provable Robustness Against Backdoor Attacks (☆39, updated Oct 3, 2023)
- Chain-PPFL: A Privacy-Preserving Federated Learning Framework based on Chained SMC (☆37, updated Jul 16, 2020)
- ☆22, updated Mar 20, 2023
- Self-teaching notes on gradient leakage attacks against GPT-2 models (☆14, updated Mar 18, 2024)
- [NeurIPS 2022] JAX/Haiku implementation of "On Privacy and Personalization in Cross-Silo Federated Learning" (☆27, updated Apr 16, 2023)
- ☆46, updated Nov 10, 2019
- Membership Inference Attack on Federated Learning (☆13, updated Jan 14, 2022)
- ☆32, updated Sep 2, 2024
- Certified Removal from Machine Learning Models (☆69, updated Aug 23, 2021)
- Our submission for the Microsoft membership inference competition at SaTML 2023 (☆15, updated Apr 5, 2023)
- A Python script to generate a clean BibTeX file for LaTeX (☆19, updated Mar 1, 2020)
- Implementation of Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) (☆15, updated Oct 8, 2020)
- ☆18, updated May 30, 2025
- [USENIX Security'24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning (☆19, updated Apr 28, 2025)
- Modular evaluation metrics and a benchmark for large-scale federated learning (☆12, updated Jul 25, 2024)
- Membership inference against federated learning (☆10, updated May 30, 2021)
- ☆14, updated Dec 8, 2022
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470 (☆153, updated Oct 3, 2022)
- [NDSS 2025] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling (☆17, updated Jan 18, 2025)
- Gradient-Leakage Resilient Federated Learning (☆14, updated Jul 25, 2022)
- Duet: A Language for Differential Privacy (☆17, updated Jul 5, 2022)
- ☆46, updated Oct 19, 2021
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) (☆24, updated Dec 9, 2021)
- Code for "ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking" (☆15, updated Jul 13, 2022)
- Source code for the ECML-PKDD 2020 paper "FedMAX: Mitigating Activation Divergence for Accurate and Communication-Efficient Federated Learn…" (☆16, updated Dec 27, 2022)
- The code for our Updates-Leak paper (☆17, updated Jul 23, 2020)
- Code implementation of Traceback of Data Poisoning Attacks in Neural Networks (☆21, updated Aug 15, 2022)
- The HyFed framework provides an easy-to-use API for developing federated, privacy-preserving machine learning algorithms (☆18, updated Sep 10, 2025)
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples (☆46, updated Nov 25, 2019)
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) (☆48, updated Aug 18, 2022)
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" (☆26, updated Jan 7, 2022)
- Code for the paper "Byzantine-Resilient Distributed Finite-Sum Optimization over Networks" (☆18, updated Nov 5, 2020)
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) (☆50, updated May 20, 2022)
- PyTorch implementation of backdoor unlearning (☆21, updated Jun 8, 2022)
- ☆30, updated May 8, 2023
- Research prototype of deletion-efficient k-means algorithms (☆24, updated Dec 19, 2019)