advboxes / perceptron-benchmark
Robustness benchmark for DNN models.
☆67 · Updated 2 years ago
Alternatives and similar repositories for perceptron-benchmark
Users interested in perceptron-benchmark are comparing it to the repositories listed below
- Code for the Competition on Adversarial Attacks and Defenses 2018 ☆40 · Updated 6 years ago
- Official repository for the CVPR 2020 AdvML Workshop paper "Role of Spatial Context in Adversarial Robustness for Object Detection" ☆36 · Updated 5 years ago
- CAAD 2018 winning submissions ☆35 · Updated 6 years ago
- ☆41 · Updated last year
- Robustness vs. Accuracy Survey on ImageNet ☆98 · Updated 3 years ago
- Code for the paper 'Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples', in TensorFlow ☆52 · Updated last month
- Adversarial Attacks and Defenses of Image Classifiers, NIPS 2017 competition track ☆45 · Updated 7 years ago
- A targeted adversarial attack method, which won the NIPS 2017 targeted adversarial attacks competition ☆133 · Updated 7 years ago
- A novel data-free model stealing method based on GANs ☆127 · Updated 2 years ago
- Mitigating Adversarial Effects Through Randomization ☆120 · Updated 7 years ago
- Implementation of 'Curls & Whey: Boosting Black-Box Adversarial Attacks' in PyTorch ☆60 · Updated 6 years ago
- White-box adversarial attack ☆38 · Updated 4 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 4 years ago
- Public release of code for "Robust Physical-World Attacks on Deep Learning Visual Classification" (Eykholt et al., CVPR 2018) ☆109 · Updated 4 years ago
- MagNet: a Two-Pronged Defense against Adversarial Examples ☆99 · Updated 6 years ago
- Implementation of the Biased Boundary Attack for ImageNet ☆23 · Updated 5 years ago
- Third-place solution of the Tianchi ImageNet Adversarial Attack Challenge ☆12 · Updated 5 years ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 6 years ago
- ☆85 · Updated 4 years ago
- Physical adversarial attack for fooling the Faster R-CNN object detector ☆165 · Updated 5 years ago
- Code for the 'DARTS: Deceiving Autonomous Cars with Toxic Signs' paper ☆38 · Updated 7 years ago
- Code used in 'Exploring the Space of Black-box Attacks on Deep Neural Networks' (https://arxiv.org/abs/1712.09491) ☆61 · Updated 7 years ago
- Public repo for the transferability ICLR 2017 paper ☆52 · Updated 6 years ago
- Detecting Adversarial Examples in Deep Neural Networks ☆67 · Updated 7 years ago
- ☆15 · Updated 5 years ago
- Adversarial Examples: Attacks and Defenses for Deep Learning ☆32 · Updated 7 years ago
- ☆49 · Updated 4 years ago
- Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adve… (a minimal sketch of the algorithm follows this list) ☆96 · Updated 4 years ago
- Improving Transferability of Adversarial Examples with Input Diversity ☆164 · Updated 6 years ago
- ☆54 · Updated 2 years ago
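
The Boundary Attack entry above refers to the decision-based attack of Brendel, Rauber, and Bethge (ICLR 2018), which needs only the model's top-1 decision: starting from any adversarial point, it alternates a random step along the sphere around the original input with a small step toward that input, shrinking the perturbation while staying adversarial. The NumPy sketch below illustrates that loop under simplifying assumptions; `model_decision`, the fixed step sizes, and the `[0, 1]` clipping are illustrative choices, not the listed repository's actual API (the original attack also adapts both step sizes from the observed success rate).

```python
import numpy as np

def boundary_attack(model_decision, x_orig, x_adv_init, steps=1000,
                    spherical_step=0.01, source_step=0.01, seed=0):
    """Minimal sketch of the decision-based Boundary Attack.

    model_decision(x) -> bool: True if x is (still) adversarial,
        e.g. classified differently from x_orig.
    x_orig:     original input as a float array scaled to [0, 1].
    x_adv_init: any adversarial starting point (e.g. misclassified noise).
    """
    rng = np.random.default_rng(seed)
    x_orig = np.asarray(x_orig, dtype=np.float64)
    x_adv = np.asarray(x_adv_init, dtype=np.float64).copy()

    for _ in range(steps):
        diff = x_orig - x_adv
        dist = np.linalg.norm(diff)
        if dist == 0.0:
            break

        # Orthogonal ("spherical") step: a random direction projected onto the
        # plane orthogonal to (x_orig - x_adv), rescaled so the candidate stays
        # approximately on the sphere of radius `dist` around x_orig.
        perturb = rng.standard_normal(x_adv.shape)
        perturb -= (np.vdot(perturb, diff) / dist**2) * diff
        perturb *= spherical_step * dist / np.linalg.norm(perturb)
        candidate = x_adv + perturb
        cand_diff = x_orig - candidate
        candidate = x_orig - cand_diff * (dist / np.linalg.norm(cand_diff))
        candidate = np.clip(candidate, 0.0, 1.0)

        if not model_decision(candidate):
            continue  # stepped out of the adversarial region; try again

        # Source step: move the adversarial candidate toward the original
        # input to reduce the perturbation size.
        closer = np.clip(candidate + source_step * (x_orig - candidate), 0.0, 1.0)
        x_adv = closer if model_decision(closer) else candidate

    return x_adv
```

In practice the attack adjusts `spherical_step` and `source_step` so that roughly half of the orthogonal candidates remain adversarial; the fixed values here are only for illustration.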