MadryLab / robustness_lib
☆11 · Updated 6 years ago
Alternatives and similar repositories for robustness_lib
Users interested in robustness_lib are comparing it to the libraries listed below.
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations. ☆49 · Updated 6 years ago
- Implementation of the Deep Frank-Wolfe Algorithm -- PyTorch ☆62 · Updated 4 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020). ☆51 · Updated 5 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 5 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability". ☆30 · Updated 6 years ago
- Computing various norms/measures on over-parametrized neural networks ☆50 · Updated 6 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 7 years ago
- Implementation of Information Dropout ☆39 · Updated 8 years ago
- Implementation of Methods Proposed in Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks (NeurIPS 2019) ☆35 · Updated 5 years ago
- Code for the paper 'Understanding Measures of Uncertainty for Adversarial Example Detection' ☆61 · Updated 7 years ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- ☆30 · Updated 6 years ago
- Repository with code for the paper "Inhibited Softmax for Uncertainty Estimation in Neural Networks" ☆25 · Updated 6 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆80 · Updated last year
- Notebooks for reproducing the paper "Computer Vision with a Single (Robust) Classifier" ☆128 · Updated 6 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆31 · Updated 6 years ago
- Geometric Certifications of Neural Nets ☆42 · Updated 2 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs ☆97 · Updated 4 years ago
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturbations ☆12 · Updated 5 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 6 years ago
- [NeurIPS'19] [PyTorch] Adaptive Regularization in Neural Networks ☆68 · Updated 6 years ago
- TensorFlow implementation of "noisy K-FAC" and "noisy EK-FAC". ☆60 · Updated 6 years ago
- Reliable Uncertainty Estimates in Deep Neural Networks using Noise Contrastive Priors ☆62 · Updated 5 years ago
- ☆61 · Updated 2 years ago
- Code for the ICML 2018 paper "Fast and Scalable Bayesian Deep Learning by Weight-Perturbation in Adam" by Khan, Nielsen, Tangkaratt, Lin, … ☆112 · Updated 6 years ago
- ☆25 · Updated 5 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated 6 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 4 years ago
- Comparison of gradient estimation techniques for black-box adversarial examples ☆11 · Updated 7 years ago