cure-lab / ContraNet
This is the official implementation of ContraNet (NDSS 2022).
☆21 · Updated 2 years ago
Alternatives and similar repositories for ContraNet
Users interested in ContraNet are comparing it to the repositories listed below.
- Revisiting Transferable Adversarial Images (TPAMI 2025) ☆140 · Updated 4 months ago
- ☆53 · Updated 4 years ago
- ☆63 · Updated 4 years ago
- Paper list of Adversarial Examples ☆52 · Updated 2 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". ☆31 · Updated 4 years ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆133 · Updated last year
- CVPR 2021 Official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆75 · Updated last year
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆49 · Updated 4 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆29 · Updated 4 years ago
- This is for releasing the source code of the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆61 · Updated last year
- ☆54 · Updated 4 years ago
- ☆69 · Updated last year
- ☆18 · Updated 4 years ago
- ☆83 · Updated 4 years ago
- A curated list of papers for the transferability of adversarial examples ☆76 · Updated last year
- TIFS2022: Decision-based Adversarial Attack with Frequency Mixup ☆22 · Updated 2 years ago
- This is an implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks" ☆128 · Updated 4 years ago
- A paper list for localized adversarial patch research ☆160 · Updated 6 months ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆51 · Updated 3 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers ☆105 · Updated 3 years ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆46 · Updated 9 months ago
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆37 · Updated last year
- ☆25 · Updated 3 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆18 · Updated 6 years ago
- Official Repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆133 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- ☆22 · Updated 3 years ago
- This is the official implementation of our paper Untargeted Backdoor Attack against Object Detection. ☆27 · Updated 2 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆46 · Updated 2 years ago
- Convert TensorFlow models to PyTorch models via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks. ☆94 · Updated 3 years ago