MadryLab / AdvEx_Tutorial
☆15 · Updated 5 years ago
Alternatives and similar repositories for AdvEx_Tutorial
Users interested in AdvEx_Tutorial are comparing it to the repositories listed below.
- Code for "Testing Robustness Against Unforeseen Adversaries"☆80Updated last year
- Code for "Robustness May Be at Odds with Accuracy"☆91Updated 2 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations.☆49Updated 6 years ago
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers"☆227Updated 5 years ago
- ☆68Updated 6 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019]☆31Updated 5 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020).☆51Updated 5 years ago
- Learning perturbation sets for robust machine learning☆65Updated 4 years ago
- Adversarially Robust Neural Network on MNIST.☆63Updated 3 years ago
- Source code for the paper "Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness"☆25Updated 5 years ago
- Code for the Paper 'On the Connection Between Adversarial Robustness and Saliency Map Interpretability' by C. Etmann, S. Lunz, P. Maass, …☆16Updated 6 years ago
- Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness"☆162Updated 5 years ago
- ☆13Updated 6 years ago
- Datasets for the paper "Adversarial Examples are not Bugs, They Are Features"☆186Updated 5 years ago
- ☆88Updated last year
- Data, code & materials from the paper "Generalisation in humans and deep neural networks" (NeurIPS 2018)☆96Updated 2 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs☆96Updated 4 years ago
- Robust Vision Benchmark☆23Updated 7 years ago
- Notebooks for reproducing the paper "Computer Vision with a Single (Robust) Classifier"☆128Updated 5 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017]☆18Updated 7 years ago
- Code for the unrestricted adversarial examples paper (NeurIPS 2018)☆65Updated 6 years ago
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb…☆12Updated 5 years ago
- Explaining Image Classifiers by Counterfactual Generation☆28Updated 3 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks☆63Updated 4 years ago
- Information Bottlenecks for Attribution☆82Updated 2 years ago
- A pytorch implementation of our jacobian regularizer to encourage learning representations more robust to input perturbations.☆129Updated 2 years ago
- Related materials for robust and explainable machine learning☆48Updated 7 years ago
- ☆21Updated last year
- Interpretation of Neural Network is Fragile☆36Updated last year
- Geometric Certifications of Neural Nets☆42Updated 2 years ago