aam-at / adversary_critic
☆13 · Updated 5 years ago
Alternatives and similar repositories for adversary_critic
Users who are interested in adversary_critic are comparing it to the libraries listed below.
- Related materials for robust and explainable machine learning ☆48 · Updated 7 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆91 · Updated 2 years ago
- Datasets for the paper "Adversarial Examples Are Not Bugs, They Are Features" ☆187 · Updated 5 years ago
- Ensemble Adversarial Training on MNIST ☆121 · Updated 8 years ago
- NIPS Adversarial Vision Challenge ☆41 · Updated 7 years ago
- Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness" ☆163 · Updated 5 years ago
- A PyTorch baseline attack example for the NIPS 2017 adversarial competition ☆86 · Updated 8 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 5 years ago
- Example code for the paper "Understanding deep learning requires rethinking generalization" ☆178 · Updated 5 years ago
- ☆88 · Updated last year
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆257 · Updated 4 years ago
- SmoothGrad implementation in PyTorch ☆172 · Updated 4 years ago
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆62 · Updated 6 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations ☆49 · Updated 6 years ago
- Adversarially Robust Neural Network on MNIST ☆63 · Updated 3 years ago
- Release of CIFAR-10.1, a new test set for CIFAR-10 ☆225 · Updated 5 years ago
- Code for the NeurIPS 2019 spotlight "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆228 · Updated 6 years ago
- PyTorch code to generate adversarial examples on MNIST and ImageNet data ☆118 · Updated 6 years ago
- Public code for the paper "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks" ☆35 · Updated 6 years ago
- PyTorch Adversarial Attack Framework ☆78 · Updated 6 years ago
- PyTorch implementation of recent visual attribution methods for model interpretability ☆146 · Updated 5 years ago
- PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation ☆338 · Updated 4 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆80 · Updated last year
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure ☆62 · Updated 6 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated last year
- Notebooks for reproducing the paper "Computer Vision with a Single (Robust) Classifier" ☆129 · Updated 6 years ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 7 years ago
- A method for training neural networks that are provably robust to adversarial attacks ☆390 · Updated 3 years ago
- Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published at ICLR 2018) ☆246 · Updated 6 years ago
- Code for the paper "Dimensionality-Driven Learning with Noisy Labels" (ICML 2018) ☆58 · Updated last year