Verifying machine unlearning by backdooring
☆ 20 · Updated Mar 25, 2023
Alternatives and similar repositories for unlearning-verification
Users interested in unlearning-verification are comparing it to the repositories listed below.
- Attacks using out-of-distribution adversarial examples · ☆ 11 · Updated Nov 19, 2019
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … · ☆ 12 · Updated Sep 6, 2023
- RAB: Provable Robustness Against Backdoor Attacks · ☆ 39 · Updated Oct 3, 2023
- Chain-PPFL: A Privacy-Preserving Federated Learning Framework based on Chained SMC · ☆ 37 · Updated Jul 16, 2020
- ☆ 22 · Updated Mar 20, 2023
- Self-teaching notes on gradient leakage attacks against GPT-2 models · ☆ 15 · Updated Mar 18, 2024
- [NeurIPS 2022] JAX/Haiku implementation of "On Privacy and Personalization in Cross-Silo Federated Learning" · ☆ 27 · Updated Apr 16, 2023
- ☆ 46 · Updated Nov 10, 2019
- Membership Inference Attack on Federated Learning · ☆ 12 · Updated Jan 14, 2022
- ☆ 32 · Updated Sep 2, 2024
- Certified Removal from Machine Learning Models · ☆ 69 · Updated Aug 23, 2021
- Our submission for the Microsoft membership inference competition at SaTML 2023 · ☆ 15 · Updated Apr 5, 2023
- Verkle trees with inner product argument (IPA) based polynomial commitment [Prototype] · ☆ 15 · Updated Mar 26, 2022
- A Python script to generate a clean BibTeX file for LaTeX · ☆ 18 · Updated Mar 1, 2020
- Implementation of "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder" (EMNLP-Findings 2020) · ☆ 15 · Updated Oct 8, 2020
- ☆ 18 · Updated May 30, 2025
- [USENIX Security '24] Lotto: Secure Participant Selection against Adversarial Servers in Federated Learning · ☆ 21 · Updated Apr 28, 2025
- Modular evaluation metrics and a benchmark for large-scale federated learning · ☆ 12 · Updated Jul 25, 2024
- ☆ 14 · Updated Dec 8, 2022
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) · ☆ 153 · Updated Oct 3, 2022
- [NDSS 2025] CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling · ☆ 16 · Updated Jan 18, 2025
- Gradient-Leakage Resilient Federated Learning · ☆ 14 · Updated Jul 25, 2022
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) · ☆ 17 · Updated Nov 11, 2020
- Duet: A Language for Differential Privacy · ☆ 16 · Updated Jul 5, 2022
- ☆ 46 · Updated Oct 19, 2021
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) · ☆ 24 · Updated Dec 9, 2021
- Code for "ObjectSeeker: Certifiably Robust Object Detection against Patch Hiding Attacks via Patch-agnostic Masking" · ☆ 14 · Updated Jul 13, 2022
- Source code for the ECML-PKDD 2020 paper "FedMAX: Mitigating Activation Divergence for Accurate and Communication-Efficient Federated Learn…" · ☆ 16 · Updated Dec 27, 2022
- Code for our Updates-Leak paper · ☆ 17 · Updated Jul 23, 2020
- Code implementation for traceback of data poisoning attacks in neural networks · ☆ 21 · Updated Aug 15, 2022
- HyFed: a framework providing an easy-to-use API for developing federated, privacy-preserving machine learning algorithms · ☆ 18 · Updated Sep 10, 2025
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆ 46 · Updated Nov 25, 2019
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) · ☆ 48 · Updated Aug 18, 2022
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) · ☆ 50 · Updated May 20, 2022
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" · ☆ 26 · Updated Jan 7, 2022
- Code for the paper "Byzantine-Resilient Distributed Finite-Sum Optimization over Networks" · ☆ 18 · Updated Nov 5, 2020
- PyTorch implementation of backdoor unlearning · ☆ 21 · Updated Jun 8, 2022
- ☆ 29 · Updated May 8, 2023
- Research prototype of deletion-efficient k-means algorithms · ☆ 24 · Updated Dec 19, 2019