Gwinhen / AmI
This repository is for the NeurIPS 2018 spotlight paper "Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples."
☆31 · Updated 3 years ago
Alternatives and similar repositories for AmI
Users who are interested in AmI are comparing it to the libraries listed below.
- Code we used in Decision Boundary Analysis of Adversarial Examples (https://openreview.net/forum?id=BkpiPMbA-) ☆28 · Updated 6 years ago
- A richly documented PyTorch implementation of the Carlini-Wagner L2 attack ☆60 · Updated 7 years ago
- Spatially Transformed Adversarial Examples with TensorFlow ☆75 · Updated 6 years ago
- ☆48 · Updated 4 years ago
- AAAI 2019 oral presentation ☆52 · Updated 3 months ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 2 years ago
- Code for paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality" ☆125 · Updated 4 years ago
- Code implementation of the paper "With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning", at USENIX … ☆20 · Updated 6 years ago
- Generalized Data-free Universal Adversarial Perturbations ☆70 · Updated 6 years ago
- A simple implementation of an Adversarial Autoencoding ATN (AAE ATN) ☆30 · Updated 8 years ago
- Robustness vs Accuracy Survey on ImageNet ☆98 · Updated 4 years ago
- Mitigating Adversarial Effects Through Randomization ☆120 · Updated 7 years ago
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) ☆111 · Updated 7 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks ☆63 · Updated 4 years ago
- Detecting Adversarial Examples in Deep Neural Networks ☆67 · Updated 7 years ago
- StrAttack, ICLR 2019 ☆33 · Updated 6 years ago
- Adversarial Defense for Ensemble Models (ICML 2019) ☆61 · Updated 4 years ago
- ☆11 · Updated 5 years ago
- PyTorch library for adversarial attack and training ☆146 · Updated 6 years ago
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks ☆169 · Updated 4 years ago
- Code for FAB-attack ☆33 · Updated 5 years ago
- Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆55 · Updated 2 years ago
- Code for the unrestricted adversarial examples paper (NeurIPS 2018) ☆64 · Updated 6 years ago
- ☆56 · Updated 2 years ago
- Interval attacks (adversarial ML) ☆21 · Updated 6 years ago
- Feature Scattering Adversarial Training (NeurIPS19) ☆74 · Updated last year
- Code for generating adversarial color-shifted images ☆19 · Updated 5 years ago
- Detect adversarial images from intermediate features in distance space ☆12 · Updated 7 years ago
- ☆85 · Updated 4 years ago
- Code for Black-Box Adversarial Attack with Transferable Model-based Embedding ☆58 · Updated 5 years ago