This repository contains the code for the NeurIPS 2018 spotlight paper "Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples."
☆31 · Updated Apr 27, 2022
Alternatives and similar repositories for AmI
Users that are interested in AmI are comparing it to the libraries listed below
- ☆11 · Updated Sep 20, 2019
- Code for generating adversarial color-shifted images ☆19 · Updated Nov 11, 2019
- Implementation of our NeurIPS 2019 paper "Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks" ☆10 · Updated Dec 16, 2019
- Attacks using out-of-distribution adversarial examples ☆11 · Updated Nov 19, 2019
- Code for "Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling" [ICML 2021] ☆10 · Updated Mar 14, 2022
- [NeurIPS 2021] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks ☆33 · Updated Jul 5, 2024
- PyTorch implementation of NPAttack ☆12 · Updated Jul 7, 2020
- ☆12 · Updated Mar 15, 2019
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining ☆19 · Updated Feb 26, 2025
- Benchmarking and visualization tool for adversarial machine learning ☆188 · Updated Apr 4, 2023
- Transferable Adversarial Attacks for Image and Video Object Detection ☆14 · Updated Jul 7, 2020
- [Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness ☆17 · Updated Jul 5, 2024
- Information Bottleneck Approach to Spatial Attention Learning, IJCAI 2021 ☆15 · Updated Jun 1, 2021
- ZOSVRG-BlackBox-Adv ☆13 · Updated Oct 30, 2018
- ☆19 · Updated Mar 26, 2022
- ☆18 · Updated Aug 15, 2022
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated May 9, 2019
- Detecting Adversarial Examples in Deep Neural Networks ☆69 · Updated Mar 19, 2018
- ☆15 · Updated Jul 23, 2020
- ☆20 · Updated Feb 11, 2024
- Code for our attack FDA: Feature Disruptive Attack. Colab notebook: https://colab.research.google.com/drive/1WhkKCrzFq5… ☆21 · Updated Nov 11, 2019
- ☆79 · Updated Oct 20, 2019
- Mitigating Adversarial Effects Through Randomization ☆120 · Updated Mar 20, 2018
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆20 · Updated Aug 9, 2023
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated Feb 10, 2023
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated Apr 25, 2020
- 🥇 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… ☆70 · Updated Aug 14, 2025
- ☆25 · Updated Apr 5, 2022
- Code for the NDSS paper "Stealthy Adversarial Perturbations Against Real-Time Video Classification Systems" ☆21 · Updated Nov 24, 2018
- Robustness vs. Accuracy Survey on ImageNet ☆99 · Updated Aug 3, 2021
- Code for the ICML 2019 paper "Simple Black-box Adversarial Attacks" ☆200 · Updated Mar 27, 2023
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM ☆449 · Updated Jul 25, 2024
- MagNet: a Two-Pronged Defense against Adversarial Examples ☆101 · Updated Oct 13, 2018
- ☆25 · Updated Jun 23, 2021
- A challenge to explore adversarial robustness of neural networks on CIFAR-10 ☆505 · Updated Aug 30, 2021
- Public repo for the ICLR 2017 transferability paper ☆53 · Updated Jan 3, 2019
- ☆25 · Updated Mar 24, 2023
- An elegant library for Bayesian deep learning in PyTorch ☆27 · Updated Dec 19, 2022
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…" ☆55 · Updated Dec 4, 2022