uclaml / Frank-Wolfe-AdvML
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks (AAAI'20)
☆11 · Updated 5 years ago
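For context on what the upstream repository implements, here is a minimal, hedged sketch of a white-box Frank-Wolfe (conditional gradient) L∞ attack of the kind the paper describes. This is not the repository's actual code: the function name `frank_wolfe_attack`, the momentum coefficient `beta`, and the step size `gamma` are illustrative assumptions, and it presumes a PyTorch classifier mapping images in [0, 1] to logits.

```python
# Minimal sketch of a white-box Frank-Wolfe (conditional gradient) L-inf attack.
# Assumes a PyTorch classifier `model`; hyperparameter names are illustrative only.
import torch
import torch.nn.functional as F

def frank_wolfe_attack(model, x, y, eps=8 / 255, steps=20, gamma=0.1, beta=0.9):
    """Untargeted L-inf attack: maximize cross-entropy inside the eps-ball around x."""
    x_orig = x.detach()
    x_adv = x_orig.clone()
    momentum = torch.zeros_like(x_orig)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Momentum on the gradient (as in momentum-variant Frank-Wolfe attacks).
        momentum = beta * momentum + (1 - beta) * grad

        # Linear maximization oracle over the L-inf ball: a signed vertex of the ball.
        v = x_orig + eps * momentum.sign()

        # Convex-combination update keeps the iterate inside the ball; clamp to valid pixels.
        x_adv = ((1 - gamma) * x_adv.detach() + gamma * v).clamp(0.0, 1.0)

    return x_adv
```

The key difference from PGD-style attacks is that each step solves a linear maximization over the ε-ball (for L∞ this is just a signed vertex) and then takes a convex combination, so the iterate never needs an explicit projection.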
Alternatives and similar repositories for Frank-Wolfe-AdvML
Users interested in Frank-Wolfe-AdvML are comparing it to the libraries listed below.
- An efficient adversarial defense method with strong insights which won fifth place in the IJCAI-2019 Alibaba Adversarial AI Challenge ☆12 · Updated 6 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆53 · Updated 5 years ago
- Code for CVPR2020 paper QEBA: Query-Efficient Boundary-Based Blackbox Attack ☆33 · Updated 4 years ago
- code we used in Decision Boundary Analysis of Adversarial Examples https://openreview.net/forum?id=BkpiPMbA- ☆29 · Updated 7 years ago
- Universal Adversarial Networks ☆32 · Updated 7 years ago
- [CVPR'19] Trust Region Based Adversarial Attack ☆20 · Updated 5 years ago
- PyTorch code for ens_adv_train ☆16 · Updated 6 years ago
- Improving the Generalization of Adversarial Training with Domain Adaptation ☆33 · Updated 6 years ago
- Codes for reproducing the black-box adversarial attacks in “ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models” ☆64 · Updated 6 years ago
- ☆48 · Updated 4 years ago
- Code for Stability Training with Noise (STN) ☆22 · Updated 5 years ago
- Feature Scattering Adversarial Training (NeurIPS19) ☆74 · Updated last year
- ☆21 · Updated 6 years ago
- ☆42 · Updated 2 years ago
- Code for FAB-attack ☆34 · Updated 5 years ago
- Implementation of the Biased Boundary Attack for ImageNet ☆22 · Updated 6 years ago
- This repository contains the official PyTorch implementation of the GeoDA algorithm. GeoDA is a black-box attack for generating adversarial examples ☆34 · Updated 4 years ago
- White-box adversarial attack ☆38 · Updated 4 years ago
- Codes for reproducing query-efficient black-box attacks in “AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks” ☆59 · Updated 5 years ago
- Image Super-Resolution as a Defense Against Adversarial Attacks ☆89 · Updated 7 years ago
- ☆57 · Updated 2 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks ☆63 · Updated 4 years ago
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints" ☆24 · Updated 3 years ago
- Spatially Transformed Adversarial Examples with TensorFlow ☆75 · Updated 7 years ago
- AAAI 2019 oral presentation ☆53 · Updated 7 months ago
- Code for our NeurIPS 2020 paper Backpropagating Linearly Improves Transferability of Adversarial Examples ☆42 · Updated 2 years ago
- Implementation of the Boundary Attack algorithm as described in Brendel, Wieland, Jonas Rauber, and Matthias Bethge. "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models" ☆99 · Updated 5 years ago
- A fast sparse attack on deep neural networks ☆51 · Updated 5 years ago
- Code for "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors" ☆64 · Updated 6 years ago
- The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆34 · Updated 5 years ago