da2so / Interpretable-Explanations-of-Black-Boxes-by-Meaningful-Perturbation
Interpretable Explanations of Black Boxes by Meaningful Perturbation (PyTorch)
☆12 · Updated 6 months ago
Alternatives and similar repositories for Interpretable-Explanations-of-Black-Boxes-by-Meaningful-Perturbation:
Users interested in Interpretable-Explanations-of-Black-Boxes-by-Meaningful-Perturbation are comparing it to the libraries listed below.
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆44 · Updated 5 years ago
- Counterfactual Explanation Based on Gradual Construction for Deep Networks (PyTorch) ☆11 · Updated 3 years ago
- ZSKD with PyTorch ☆30 · Updated last year
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- ☆46 · Updated 4 years ago
- Official PyTorch implementation for our ICCV 2019 paper "Fooling Network Interpretation in Image Classification" ☆24 · Updated 5 years ago
- PyTorch implementation for "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 2 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- PyTorch code for the KDD 2018 paper "Towards Explanation of DNN-based Prediction with Guided Feature Inversion" ☆21 · Updated 6 years ago
- ICML'20: SIGUA: Forgetting May Make Learning with Noisy Labels More Robust ☆15 · Updated 4 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 4 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆30 · Updated 5 years ago
- Implementation of our NeurIPS 2018 paper "Deep Defense: Training DNNs with Improved Adversarial Robustness" ☆39 · Updated 6 years ago
- Code for "Out-of-Distribution Detection Using an Ensemble of Self-Supervised Leave-out Classifiers" ☆27 · Updated 2 years ago
- Code for "Neuron Shapley: Discovering the Responsible Neurons" ☆25 · Updated 10 months ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30 · Updated 5 years ago
- Official PyTorch implementation of "Flexible Dataset Distillation: Learn Labels Instead of Images" ☆41 · Updated 4 years ago
- A general method for training cost-sensitive robust classifiers ☆22 · Updated 5 years ago
- Demonstrates the application of our proposed task-free continual learning method on a synthetic experiment. ☆13 · Updated 5 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) ☆47 · Updated 2 years ago
- Code for Active Mixup (CVPR 2020) ☆22 · Updated 3 years ago
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…" ☆55 · Updated 2 years ago
- ☆37 · Updated 3 years ago
- [ICLR 2021 Spotlight Oral] "Undistillable: Making A Nasty Teacher That CANNOT teach students", Haoyu Ma, Tianlong Chen, Ting-Kuei Hu, Che… ☆81 · Updated 3 years ago
- An (imperfect) implementation of wide ResNets and Parseval regularization ☆9 · Updated 4 years ago
- Notebooks for the PAR tutorial at CVPR 2021 ☆36 · Updated 3 years ago
- 1st Place approach for the CVPR 2020 Continual Learning Challenge ☆46 · Updated 4 years ago
- Official repository for the AAAI-21 paper "Explainable Models with Consistent Interpretations" ☆18 · Updated 2 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated 2 years ago