theyoucheng / deepcover
DeepCover: Uncover the truth behind AI
☆32 · Updated 7 months ago
Related projects
Alternatives and complementary repositories for deepcover
- Code for the paper "Understanding Measures of Uncertainty for Adversarial Example Detection" ☆57 · Updated 6 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 6 years ago
- Code and figures for "Right for the Right Reasons" ☆55 · Updated 3 years ago
- Geometric Certifications of Neural Nets ☆41 · Updated last year
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆30 · Updated 5 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 4 years ago
- Benchmark for LP-relaxed robustness verification of ReLU networks ☆40 · Updated 5 years ago
- Computing various norms/measures on over-parametrized neural networks ☆49 · Updated 5 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples ☆43 · Updated 4 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 4 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) ☆46 · Updated last year
- An (imperfect) implementation of wide ResNets and Parseval regularization ☆8 · Updated 4 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- Robustness for Non-Parametric Classification: A Generic Attack and Defense ☆18 · Updated last year
- PyTorch code for the KDD 2018 paper "Towards Explanation of DNN-based Prediction with Guided Feature Inversion" ☆21 · Updated 5 years ago
- Repository with code for the paper "Inhibited Softmax for Uncertainty Estimation in Neural Networks" ☆25 · Updated 5 years ago
- Caffe code for the paper "Adversarial Manipulation of Deep Representations" ☆16 · Updated 7 years ago
- PyTorch reproduction of "Thermometer Encoding: One Hot Way To Resist Adversarial Examples" ☆16 · Updated 6 years ago
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 5 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30 · Updated 5 years ago
- Implementation of Bayesian NNs in PyTorch (https://arxiv.org/pdf/1703.02910.pdf), with some help from https://github.com/Riashat/Deep-Ba… ☆31 · Updated 3 years ago
- Implementation of the Deep Frank-Wolfe Algorithm in PyTorch ☆61 · Updated 3 years ago
- Reliable Uncertainty Estimates in Deep Neural Networks using Noise Contrastive Priors ☆62 · Updated 4 years ago
- Certifying Geometric Robustness of Neural Networks ☆15 · Updated last year
- Circumventing the defense in "Ensemble Adversarial Training: Attacks and Defenses" ☆39 · Updated 6 years ago
- Repository for our ICCV 2019 paper: Adversarial Defense via Learning to Generate Diverse Attacks ☆21 · Updated 3 years ago