tianyu139 / friendly-noise
Code for "Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks" (NeurIPS 2022)
☆10 · Updated 2 years ago
Alternatives and similar repositories for friendly-noise
Users interested in friendly-noise are comparing it to the repositories listed below.
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Updated 2 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated last year
- ☆21 · Updated 2 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 11 months ago
- ☆10 · Updated 3 years ago
- ☆10 · Updated 10 months ago
- ☆15 · Updated 2 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 3 years ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated 3 years ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆14 · Updated 2 years ago
- The official implementation of the USENIX Security '23 paper "Meta-Sift" -- ten minutes or less to find a 1000-size or larger clean subset on … ☆19 · Updated 2 years ago
- Not All Poisons are Created Equal: Robust Training against Data Poisoning (ICML 2022) ☆21 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- Official implementation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability" ☆15 · Updated last year
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆32 · Updated last year
- ☆32 · Updated last year
- ☆12 · Updated last year
- ☆54 · Updated 4 years ago
- ☆20 · Updated 2 weeks ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated 2 weeks ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆36 · Updated 7 months ago
- ☆13 · Updated 4 years ago
- ☆13 · Updated last year
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆10 · Updated last year
- All code and data necessary to replicate experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model…" ☆13 · Updated last year
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆11 · Updated 2 years ago
- PyTorch implementation of the ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated 2 years ago
- [BMVC 2023] Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning ☆17 · Updated 2 years ago
- Camouflage poisoning via machine unlearning ☆17 · Updated 4 months ago
- ☆18 · Updated 4 years ago