da2so / Interpretable-Explanations-of-Black-Boxes-by-Meaningful-Perturbation
Interpretable Explanations of Black Boxes by Meaningful Perturbation (PyTorch)
☆12 · Updated 7 months ago
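A minimal sketch of the meaningful-perturbation idea this repository implements (after Fong & Vedaldi, ICCV 2017): optimize a small, smooth mask so that blending the image with a blurred copy in the masked region drops the classifier's score for the target class. The function name, hyperparameters, and the average-pooling blur below are illustrative assumptions, not code taken from this repository.

```python
# Hypothetical sketch of meaningful-perturbation explanations (Fong & Vedaldi, 2017).
# A coarse mask m in [0, 1] is learned; masked pixels are replaced by a blurred copy
# of the image, and the mask is optimized to minimize the target-class score while
# staying sparse (L1) and smooth (total variation).
import torch
import torch.nn.functional as F

def meaningful_perturbation(model, img, target, steps=300,
                            lam_l1=1e-2, lam_tv=1e-1, mask_size=28):
    """img: (1, 3, H, W) normalized tensor; returns a (1, 1, H, W) saliency mask."""
    model.eval()
    # Smoothed reference image; the paper uses a Gaussian blur, average pooling is a stand-in.
    blurred = F.avg_pool2d(img, kernel_size=11, stride=1, padding=5)
    # Coarse mask parameters: low resolution keeps the explanation smooth and cheap.
    mask_logits = torch.zeros(1, 1, mask_size, mask_size, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=0.1)
    for _ in range(steps):
        m = torch.sigmoid(mask_logits)                        # mask values in [0, 1]
        m_up = F.interpolate(m, size=img.shape[-2:], mode="bilinear",
                             align_corners=False)
        perturbed = img * (1 - m_up) + blurred * m_up         # "delete" the masked regions
        score = F.softmax(model(perturbed), dim=1)[0, target]
        tv = (m[..., 1:, :] - m[..., :-1, :]).abs().mean() + \
             (m[..., :, 1:] - m[..., :, :-1]).abs().mean()
        # Objective: drop the class score with a mask that is as small and smooth as possible.
        loss = score + lam_l1 * m.abs().mean() + lam_tv * tv
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return F.interpolate(torch.sigmoid(mask_logits), size=img.shape[-2:],
                             mode="bilinear", align_corners=False)
```

With any PyTorch classifier and a normalized input image, the returned mask acts as a heat map: high values mark regions whose removal most lowers the score for the chosen class.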
Alternatives and similar repositories for Interpretable-Explanations-of-Black-Boxes-by-Meaningful-Perturbation:
Users interested in Interpretable-Explanations-of-Black-Boxes-by-Meaningful-Perturbation are comparing it to the repositories listed below.
- Counterfactual Explanation Based on Gradual Construction for Deep Networks (PyTorch) ☆11 · Updated 4 years ago
- Label shift experiments ☆16 · Updated 4 years ago
- ICML'20: SIGUA: Forgetting May Make Learning with Noisy Labels More Robust ☆15 · Updated 4 years ago
- Code for the Overinterpretation paper ☆19 · Updated last year
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- PyTorch code for the KDD '18 paper: Towards Explanation of DNN-based Prediction with Guided Feature Inversion ☆21 · Updated 6 years ago
- 1st-place approach for the CVPR 2020 Continual Learning Challenge ☆46 · Updated 4 years ago
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- NeurIPS Reproducibility Challenge 2019 ☆9 · Updated 5 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30 · Updated 5 years ago
- ☆20 · Updated 6 years ago
- Code for the CVPR 2021 paper: MOOD: Multi-level Out-of-distribution Detection ☆38 · Updated last year
- PyTorch implementation for "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 2 years ago
- Official PyTorch implementation for our ICCV 2019 paper, Fooling Network Interpretation in Image Classification ☆24 · Updated 5 years ago
- ☆27 · Updated 4 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 4 years ago
- Implementation of the paper "Identifying Mislabeled Data using the Area Under the Margin Ranking" (https://arxiv.org/pdf/2001.10528v2.pdf) ☆21 · Updated 5 years ago
- Official repository for the AAAI-21 paper 'Explainable Models with Consistent Interpretations' ☆18 · Updated 3 years ago
- Implementation of the paper "Adapting Auxiliary Losses Using Gradient Similarity" ☆32 · Updated 6 years ago
- Experiments on meta-learning algorithms to solve few-shot domain adaptation ☆10 · Updated 3 years ago
- Official PyTorch implementation of “Flexible Dataset Distillation: Learn Labels Instead of Images” ☆42 · Updated 4 years ago
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 2 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples ☆44 · Updated 5 years ago
- ☆46 · Updated 4 years ago
- Implementation of "What it Thinks is Important is Important: Robustness Transfers through Input Gradients" (CVPR 2020 Oral) ☆16 · Updated 2 years ago
- GitHub repository for the conference paper GLOD: Gaussian Likelihood OOD detector ☆16 · Updated 3 years ago
- ZSKD (Zero-Shot Knowledge Distillation) with PyTorch ☆30 · Updated last year
- A regularized self-labeling approach to improve the generalization and robustness of fine-tuned models ☆28 · Updated 2 years ago
- ☆30 · Updated 3 years ago
- Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019 ☆27 · Updated 5 years ago