wwoods / adversarial-explanations-cifar
Code example for the paper, "Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness."
☆23 · Updated last year
Alternatives and similar repositories for adversarial-explanations-cifar
Users interested in adversarial-explanations-cifar are comparing it to the repositories listed below.
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks, in ICCV 2019 ☆58 · Updated 5 years ago
- PyTorch Implementation of CVPR'19 (oral) - Mitigating Information Leakage in Image Representations: A Maximum Entropy Approach ☆28 · Updated 5 years ago
- A PyTorch implementation of Large Margin Deep Networks for Classification ☆23 · Updated 6 years ago
- [NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Z… ☆125 · Updated 3 years ago
- Reverse Cross Entropy for Adversarial Detection (NeurIPS 2018) ☆45 · Updated 4 years ago
- 'Robust Semantic Interpretability: Revisiting Concept Activation Vectors' Official Implementation ☆11 · Updated 4 years ago
- [CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning ☆85 · Updated 3 years ago
- ☆141 · Updated 4 years ago
- Visual Explanation using Uncertainty based Class Activation Maps ☆23 · Updated 5 years ago
- Official PyTorch implementation for our ICCV 2019 paper - Fooling Network Interpretation in Image Classification ☆24 · Updated 5 years ago
- REPresentAtion bIas Removal (REPAIR) of datasets ☆56 · Updated 2 years ago
- Project page for our paper: Interpreting Adversarially Trained Convolutional Neural Networks ☆66 · Updated 5 years ago
- [TNNLS 2019] Gaussian-based softmax: Improving Intra-class Compactness and Inter-class Separability of Features ☆9 · Updated 6 years ago
- ☆46 · Updated 4 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 3 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Accompanying code for the paper "Zero-shot Knowledge Transfer via Adversarial Belief Matching" ☆141 · Updated 5 years ago
- Code for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", publi… ☆13 · Updated 4 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆45 · Updated 5 years ago
- Source code accompanying our CVPR 2019 paper: "NetTailor: Tuning the architecture, not just the weights." ☆53 · Updated 3 years ago
- Code for "Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers" ☆27 · Updated 3 years ago
- Code for "Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters" ☆28 · Updated this week
- Code for the paper "Adversarial Attacks Against Medical Deep Learning Systems" ☆67 · Updated 6 years ago
- Code for the paper 'On the Connection Between Adversarial Robustness and Saliency Map Interpretability' by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 6 years ago
- ☆73 · Updated 5 years ago
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference" ☆24 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- This repository contains some of the latest data augmentation techniques and optimizers for image classification using PyTorch and the CI… ☆29 · Updated 3 years ago
- In this part, I've introduced and experimented with ways to interpret and evaluate image models. (PyTorch) ☆40 · Updated 5 years ago
- A PyTorch implementation of the Explainable AI work 'Contrastive layerwise relevance propagation (CLRP)' ☆17 · Updated 3 years ago