IBM / devil-in-GAN
This is a repo for the demo of backdoor attacks on StyleGAN and WaveGAN
☆19 · Updated 4 years ago
Alternatives and similar repositories for devil-in-GAN
Users interested in devil-in-GAN are comparing it to the repositories listed below.
- Implementation of "Robust Watermarking of Neural Network with Exponential Weighting" in TensorFlow. ☆13 · Updated 5 years ago
- ☆20 · Updated 3 months ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Updated 11 months ago
- ☆15 · Updated 2 years ago
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks". ☆16 · Updated 4 years ago
- Code implementation for "Gotta Catch ’Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks" ☆32 · Updated 3 years ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated 3 years ago
- Code for Backdoor Attacks Against Dataset Distillation ☆35 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 3 years ago
- ☆44 · Updated 2 years ago
- ☆26 · Updated 7 years ago
- ☆27 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated 2 years ago
- ☆13 · Updated 4 years ago
- ☆32 · Updated 3 years ago
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. ☆29 · Updated 2 years ago
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C… ☆27 · Updated 3 years ago
- [ICCV 2021] We find that most existing backdoor-attack triggers in deep learning contain severe artifacts in the frequency domain. This Rep… ☆48 · Updated 3 years ago
- ☆23 · Updated 5 years ago
- Source code for the "Energy-Latency Attacks via Sponge Poisoning" paper. ☆15 · Updated 3 years ago
- This is the official implementation of our paper 'Black-box Dataset Ownership Verification via Backdoor Watermarking'. ☆26 · Updated 2 years ago
- Camouflage poisoning via machine unlearning ☆19 · Updated 7 months ago
- Protect your machine learning models easily and securely with watermarking 🔑 ☆97 · Updated last year
- ☆14 · Updated last year
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆12 · Updated 2 years ago
- Code repository for the paper "Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" (USENIX Security 2023) ☆30 · Updated 2 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated 3 months ago
- Implementations of data poisoning attacks against neural networks and related defenses. ☆102 · Updated last year