Jimmy-di / camouflage-poisoning
Camouflage poisoning via machine unlearning
☆17 · Updated 2 years ago
Alternatives and similar repositories for camouflage-poisoning:
Users interested in camouflage-poisoning are comparing it to the repositories listed below.
- The official implementation of the USENIX Security '23 paper "Meta-Sift" -- ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated last year
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 5 months ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆57 · Updated last year
- A repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- ☆19 · Updated 7 months ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆30 · Updated 2 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆35 · Updated last year
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆35 · Updated 6 months ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Updated 2 years ago
- [ICLR '21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- Code for "Backdoor Attacks Against Dataset Distillation" ☆34 · Updated 2 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated 2 years ago
- Verifying machine unlearning by backdooring ☆20 · Updated 2 years ago
- ☆11 · Updated 2 years ago
- PyTorch implementation of backdoor unlearning. ☆17 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆49 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆14 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- Likelihood Ratio Attack (LiRA) in PyTorch ☆14 · Updated last month
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P '22 ☆22 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- ☆23 · Updated 10 months ago
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆67 · Updated last year
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated 2 years ago
- Source code for the ICML 2021 paper "When Does Data Augmentation Help With Membership Inference Attacks?" ☆8 · Updated 3 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆34 · Updated 7 months ago