aam-at / adversary_critic
☆13 · Updated 5 years ago
Alternatives and similar repositories for adversary_critic
Users interested in adversary_critic are comparing it to the repositories listed below.
- Related materials for robust and explainable machine learning ☆48 · Updated 7 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆91 · Updated 2 years ago
- NIPS Adversarial Vision Challenge ☆41 · Updated 6 years ago
- A PyTorch baseline attack example for the NIPS 2017 adversarial competition ☆86 · Updated 8 years ago
- Ensemble Adversarial Training on MNIST ☆121 · Updated 8 years ago
- Release of CIFAR-10.1, a new test set for CIFAR-10 ☆223 · Updated 5 years ago
- Provable Robustness of ReLU Networks via Maximization of Linear Regions (AISTATS 2019) ☆32 · Updated 5 years ago
- Datasets for the paper "Adversarial Examples Are Not Bugs, They Are Features" ☆186 · Updated 4 years ago
- Example code for the paper "Understanding Deep Learning Requires Rethinking Generalization" ☆178 · Updated 5 years ago
- ☆88 · Updated last year
- Adversarially robust neural network on MNIST ☆63 · Updated 3 years ago
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆62 · Updated 6 years ago
- A method for training neural networks that are provably robust to adversarial attacks ☆392 · Updated 3 years ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆255 · Updated 4 years ago
- PyTorch code to generate adversarial examples on MNIST and ImageNet data ☆117 · Updated 6 years ago
- Variational Autoencoder ☆39 · Updated 8 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations ☆49 · Updated 6 years ago
- PyTorch implementation of recent visual attribution methods for model interpretability ☆146 · Updated 5 years ago
- Code for "Learning Perceptually-Aligned Representations via Adversarial Robustness" ☆162 · Updated 5 years ago
- Countering Adversarial Images Using Input Transformations ☆496 · Updated 3 years ago
- SmoothGrad implementation in PyTorch ☆172 · Updated 4 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆80 · Updated last year
- Code for the paper "Dimensionality-Driven Learning with Noisy Labels" (ICML 2018) ☆58 · Updated last year
- Interfaces for defining robust ML models and precisely specifying the threat models under which they claim to be secure ☆62 · Updated 6 years ago
- A DIRT-T Approach to Unsupervised Domain Adaptation (ICLR 2018) ☆176 · Updated 7 years ago
- Public code for the paper "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks" ☆34 · Updated 6 years ago
- TensorFlow implementation of an adversarial attack on capsule networks ☆172 · Updated 7 years ago
- PyTorch adversarial attack framework ☆78 · Updated 6 years ago
- Code for our NeurIPS 2019 spotlight paper "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆226 · Updated 5 years ago
- ☆26 · Updated 6 years ago