Jimmy-di / camouflage-poisoning
Camouflage poisoning via machine unlearning
☆19 · Updated Jul 3, 2025
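The repository accompanies work on camouflaged data poisoning: the attacker inserts a poison set together with a camouflage set that neutralizes it, so the trained model behaves normally until an unlearning request removes the camouflage points and the poison takes effect. The sketch below is a hypothetical illustration of that threat model, not the repository's actual method: it substitutes plainly mislabeled points for the paper's poisons, a k-NN classifier on toy Gaussian data for its neural-network setting, and exact retraining for the unlearning mechanism.

```python
# Hypothetical illustration of camouflaged poisoning; the toy data, the k-NN
# learner, and retraining-as-unlearning are all stand-ins for the real attack.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Clean task: two well-separated Gaussian blobs.
X_clean = np.vstack([rng.normal(-4, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y_clean = np.array([0] * 200 + [1] * 200)

# Point the attacker wants misclassified as class 1, but only after unlearning.
x_target = np.array([[0.0, 0.0]])

# Poison set: mislabeled points scattered around the target.
X_poison = x_target + rng.normal(0, 0.8, (30, 2))
y_poison = np.ones(30, dtype=int)

# Camouflage set: correctly labeled points packed tightly around the target,
# masking the poison as long as both sets sit in the training data.
X_camo = x_target + rng.normal(0, 0.2, (30, 2))
y_camo = np.zeros(30, dtype=int)

def fit(Xs, ys):
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(np.vstack(Xs), np.concatenate(ys))
    return clf

before = fit([X_clean, X_poison, X_camo], [y_clean, y_poison, y_camo])
after = fit([X_clean, X_poison], [y_clean, y_poison])  # camouflage "unlearned"

print("before unlearning:", before.predict(x_target)[0])  # 0: model looks benign
print("after unlearning: ", after.predict(x_target)[0])   # 1: poison activates
```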
Alternatives and similar repositories for camouflage-poisoning
Users who are interested in camouflage-poisoning are comparing it to the repositories listed below.
- Code for the ICLR 2022 paper: Trigger Hunting with a Topological Prior for Trojan Detection ☆11 · Updated Sep 19, 2023
- ☆19 · Updated Mar 26, 2022
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated Jan 9, 2022
- ☆19 · Updated Jun 21, 2021
- Official repo of the paper Deep Regression Unlearning, accepted at ICML 2023 ☆14 · Updated Jun 14, 2023
- ☆69 · Updated Feb 17, 2024
- ☆21 · Updated Oct 25, 2021
- Certified Removal from Machine Learning Models ☆69 · Updated Aug 23, 2021
- ☆25 · Updated Nov 12, 2022
- ☆10 · Updated Jul 28, 2022
- ☆11 · Updated Jan 25, 2022
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Updated Oct 14, 2024
- Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression ☆14 · Updated Mar 22, 2025
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Updated Aug 17, 2023
- [NeurIPS 2024] "Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection" ☆13 · Updated Oct 28, 2024
- The official implementation of the CVPR 2025 paper "Invisible Backdoor Attack against Self-supervised Learning" ☆17 · Updated Jul 5, 2025
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong, … ☆15 · Updated Nov 27, 2023
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated Oct 10, 2022
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated Feb 27, 2020
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Updated Mar 28, 2020
- ☆16 · Updated Jul 17, 2022
- The code for the "Dynamic Backdoor Attacks Against Machine Learning Models" paper ☆16 · Updated Nov 20, 2023
- ☆21 · Updated Sep 16, 2024
- Implementation for "Robust Weight Perturbation for Adversarial Training" (IJCAI'22) ☆16 · Updated Jul 1, 2022
- ☆18 · Updated Jun 15, 2021
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated Feb 13, 2024
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020) ☆17 · Updated Nov 11, 2020
- A simple backdoor model for federated learning. We use MNIST as the original dataset for the data attack, and we use the CIFAR-10 dataset… ☆14 · Updated Jun 19, 2020
- ☆19 · Updated Jun 5, 2023
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆83 · Updated Mar 12, 2024
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15 · Updated Oct 8, 2020
- Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023) ☆47 · Updated Feb 28, 2023
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated Aug 24, 2022
- ☆22 · Updated Sep 16, 2022
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated Sep 9, 2024
- ☆21 · Updated Oct 25, 2023
- Machine Unlearning for Random Forests ☆22 · Updated Jun 17, 2024
- ☆27 · Updated Oct 17, 2022
- Defending Against Backdoor Attacks Using Robust Covariance Estimation ☆22 · Updated Jul 12, 2021