Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019
☆47 · Updated Dec 8, 2022
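The repository accompanies a paper on adversarial training against a union of perturbation models. As a rough illustration of that idea only (not the paper's actual code), here is a minimal NumPy sketch: at each step, a logistic-regression model is attacked with both a one-step L∞ (FGSM-style) and a one-step L2 perturbation, and training proceeds on whichever perturbed input is worse.

```python
import numpy as np

# Hedged sketch of "worst-case over a union of perturbations" training.
# All names and hyperparameters here are illustrative assumptions.

rng = np.random.default_rng(0)

def loss_and_grad_x(w, x, y):
    """Logistic loss of a linear model and its gradient w.r.t. the input x."""
    z = x @ w
    p = 1.0 / (1.0 + np.exp(-y * z))
    loss = -np.log(p + 1e-12)
    gx = -y * (1.0 - p) * w          # d(loss)/dx
    return loss, gx

def worst_of_union(w, x, y, eps_inf=0.1, eps_2=0.5):
    """One-step attack per norm; keep the candidate with the higher loss."""
    _, gx = loss_and_grad_x(w, x, y)
    x_inf = x + eps_inf * np.sign(gx)                       # L-inf step
    x_l2 = x + eps_2 * gx / (np.linalg.norm(gx) + 1e-12)    # normalized L2 step
    candidates = [x_inf, x_l2]
    losses = [loss_and_grad_x(w, xc, y)[0] for xc in candidates]
    return candidates[int(np.argmax(losses))]

# Toy data: two Gaussian blobs in 2D, labels in {-1, +1}.
X = np.vstack([rng.normal(+1.0, 1.0, (50, 2)), rng.normal(-1.0, 1.0, (50, 2))])
Y = np.array([+1] * 50 + [-1] * 50)

w = np.zeros(2)
for _ in range(200):                          # adversarial training loop
    for x, y in zip(X, Y):
        x_adv = worst_of_union(w, x, y)       # train on the worst perturbation
        p = 1.0 / (1.0 + np.exp(-y * (x_adv @ w)))
        w -= 0.05 * (-y * (1.0 - p) * x_adv)  # SGD step on the adversarial point

acc = np.mean(np.sign(X @ w) == Y)            # clean accuracy after training
```

The "max over attacks" choice in `worst_of_union` is the simplest way to aggregate multiple threat models; the MSD repository listed below explores a more refined alternative that mixes steepest-descent steps across norms within a single attack.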
Alternatives and similar repositories for MultiRobustness
Users interested in MultiRobustness are comparing it to the repositories listed below.
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models. ☆25 · Updated Jul 25, 2024
- Feature Scattering Adversarial Training (NeurIPS 2019) ☆74 · Updated Jun 1, 2024
- Robustness for Non-Parametric Classification: A Generic Attack and Defense ☆18 · Updated Nov 21, 2022
- Semi-supervised learning for adversarial robustness (https://arxiv.org/pdf/1905.13736.pdf) ☆142 · Updated Mar 30, 2020
- Official implementation of the paper "A New Defense Against Adversarial Images: Turning a Weakness into a Strength" ☆38 · Updated Feb 15, 2020
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆228 · Updated Nov 9, 2019
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Updated Jul 3, 2021
- Code for our ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability. ☆17 · Updated Dec 8, 2022
- ☆42 · Updated Dec 8, 2022
- TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) ☆553 · Updated Mar 30, 2023
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆52 · Updated Nov 2, 2020
- The Search for Sparse, Robust Neural Networks ☆11 · Updated Mar 24, 2023
- Comparison of gradient estimation techniques for black-box adversarial examples ☆11 · Updated Oct 31, 2018
- SGD with large step sizes learns sparse features [ICML 2023] ☆33 · Updated Apr 24, 2023
- Understanding and Improving Fast Adversarial Training [NeurIPS 2020] ☆96 · Updated Sep 23, 2021
- A challenge to explore adversarial robustness of neural networks on CIFAR10. ☆505 · Updated Aug 30, 2021
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… ☆12 · Updated Aug 5, 2020
- Code for the ICML 2019 paper "Simple Black-box Adversarial Attacks" ☆200 · Updated Mar 27, 2023
- ☆15 · Updated Dec 7, 2021
- Connecting Interpretability and Robustness in Decision Trees through Separation ☆17 · Updated May 8, 2021
- ☆38 · Updated Jun 10, 2021
- ☆20 · Updated Jun 10, 2020
- Spatially Transformed Adversarial Examples with TensorFlow ☆75 · Updated Nov 3, 2018
- First-Order Adversarial Vulnerability of Neural Networks and Input Dimension ☆15 · Updated Sep 4, 2019
- PatchAttack (ECCV 2020) ☆64 · Updated May 22, 2020
- Implementation of the Biased Boundary Attack for ImageNet ☆22 · Updated Aug 18, 2019
- Robust evasion attacks against neural networks for finding adversarial examples ☆858 · Updated Jun 1, 2021
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated May 16, 2022
- Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks ☆46 · Updated Feb 24, 2022
- Dissecting the weight space of neural networks ☆18 · Updated Apr 16, 2021
- ☆16 · Updated Dec 4, 2019
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV 2019) ☆58 · Updated Oct 24, 2019
- Adversarial Defense for Ensemble Models (ICML 2019) ☆61 · Updated Nov 28, 2020
- Code for our NeurIPS 2019 paper "You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle" ☆179 · Updated Jul 25, 2024
- Strongest attack against Feature Scatter and Adversarial Interpolation ☆25 · Updated Dec 26, 2019
- Contest proposal and infrastructure for the Unrestricted Adversarial Examples Challenge ☆334 · Updated Sep 17, 2020
- A method for training neural networks that are provably robust to adversarial attacks. ☆391 · Updated Feb 16, 2022
- ☆18 · Updated Feb 16, 2023
- Provable adversarial robustness at ImageNet scale ☆406 · Updated May 20, 2019