Code for our CVPR 2018 paper, "On the Robustness of Semantic Segmentation Models to Adversarial Attacks"
☆103, updated Mar 8, 2019
Alternatives and similar repositories for adversarial-attacks
Users interested in adversarial-attacks are comparing it to the repositories listed below.
- PyTorch implementation of a segmentation model and adversarial attacks (☆14, updated Oct 20, 2019)
- Source code for a project on adversarial examples for semantic segmentation networks (☆13, updated Sep 12, 2021)
- ☆12, updated Mar 29, 2021
- Adversarial Examples for Semantic Segmentation and Object Detection (☆126, updated Jan 30, 2018)
- ☆28, updated Sep 22, 2022
- Implementation of the Biased Boundary Attack for ImageNet (☆22, updated Aug 18, 2019)
- Pre-trained model, code, and materials from the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmen…" (☆60, updated Jul 6, 2020)
- Attacking Optical Flow (ICCV 2019) (☆59, updated Apr 28, 2020)
- Track hijacking attack against multiple-object tracking (☆45, updated Aug 27, 2019)
- ☆14, updated Jul 25, 2020
- ICLR 2019 paper, "Characterizing Audio Adversarial Examples Using Temporal Dependency" (☆12, updated Apr 3, 2019)
- Code for the CVPR 2019 paper "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses" (☆137, updated Nov 25, 2020)
- PyTorch implementation of convolutional neural network adversarial attack techniques (☆364, updated Dec 3, 2018)
- Code for reproducing the white-box adversarial attacks in "EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples," … (☆21, updated Sep 22, 2018)
- NIPS 2017 Adversarial Competition in PyTorch (☆14, updated Feb 4, 2018)
- Generative Adversarial Perturbations (CVPR 2018) (☆137, updated Dec 16, 2020)
- An adversarial attack on object detectors (☆149, updated Oct 12, 2021)
- Implementations of papers on adversarial examples (☆397, updated Apr 24, 2023)
- [USENIX'23] TPatch: A Triggered Physical Adversarial Patch (☆24, updated Aug 8, 2023)
- ☆11, updated Dec 6, 2020
- Nuerapse simulations for SNNs (☆25, updated Oct 10, 2018)
- ☆24, updated Apr 14, 2019
- Datasets for the paper "Adversarial Examples Are Not Bugs, They Are Features" (☆187, updated Sep 17, 2020)
- Robust Adversarial Perturbation on Deep Proposal-based Models (☆25, updated Jul 15, 2022)
- When can you tell whether an image has been cropped or not? (☆29, updated Sep 19, 2021)
- ImageNet classifier with state-of-the-art adversarial robustness (☆685, updated Dec 31, 2019)
- Mitigating Adversarial Effects Through Randomization (☆120, updated Mar 20, 2018)
- CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors (☆15, updated Nov 3, 2024)
- PyTorch implementation of "Hallucinating Agnostic Images to Generalize Across Domains" (☆11, updated Jul 10, 2019)
- Physical adversarial attack for fooling the Faster R-CNN object detector (☆168, updated Jan 13, 2020)
- Real-time object detection is one of the key applications of deep neural networks (DNNs) for real-world mission-critical systems. While D… (☆134, updated Apr 4, 2023)
- Supplementary material to "Top-Down Visual Saliency Guided by Captions" (CVPR 2017) (☆107, updated Jan 22, 2018)
- Knowledge Distillation with Adversarial Samples Supporting Decision Boundary (AAAI 2019) (☆71, updated Sep 9, 2019)
- Analysis of Adversarial Logit Pairing (☆61, updated Aug 13, 2018)
- Generalized Data-free Universal Adversarial Perturbations (☆73, updated Oct 5, 2018)
- A targeted adversarial attack method that won the NIPS 2017 targeted adversarial attack competition (☆135, updated May 29, 2018)
- Code for "Boosting Semi-supervised Image Segmentation with Global and Local Mutual Information Regularization" (☆13, updated Jul 14, 2021)
- Automated simulations of adversarial attacks on arbitrary objects in realistic scenes (☆14, updated Oct 5, 2025)
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples (☆906, updated Jun 10, 2023)