nesl / Explainability-Study
How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods
☆23 · Updated 4 years ago
Alternatives and similar repositories for Explainability-Study
Users interested in Explainability-Study are comparing it to the repositories listed below.
- Parameter-Space Saliency Maps for Explainability ☆23 · Updated 2 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30 · Updated 6 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Code for Net2Vec: Quantifying and Explaining How Concepts Are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- Figures & code from the paper "Shortcut Learning in Deep Neural Networks" (Nature Machine Intelligence 2020) ☆97 · Updated 2 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆31 · Updated 5 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 4 years ago
- Code for the CVPR 2021 paper "Understanding Failures of Deep Networks via Robust Feature Extraction" ☆36 · Updated 2 years ago
- Interpretation of Neural Networks Is Fragile ☆36 · Updated last year
- ☆19 · Updated 4 years ago
- Code/figures for "Right for the Right Reasons" ☆55 · Updated 4 years ago
- Implementation of "What It Thinks Is Important Is Important: Robustness Transfers Through Input Gradients" (CVPR 2020 Oral) ☆16 · Updated 2 years ago
- PyTorch implementation of "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 2 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 4 years ago
- An Empirical Study of Invariant Risk Minimization ☆27 · Updated 4 years ago
- ☆27 · Updated 4 years ago
- Official PyTorch implementation of the ICCV 2019 paper "Fooling Network Interpretation in Image Classification" ☆24 · Updated 5 years ago
- Geometric Certifications of Neural Nets ☆41 · Updated 2 years ago
- PyTorch implementation of "Hallucinating Agnostic Images to Generalize Across Domains" ☆11 · Updated 5 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆42 · Updated last year
- Mathematical consequences of orthogonal weight initialization and regularization in deep learning. Experiments with gain-adjusted orthog… ☆17 · Updated 5 years ago
- Official repository for "Why Are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps" ☆34 · Updated 5 years ago
- Quantitative Testing with Concept Activation Vectors in PyTorch ☆42 · Updated 6 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Estimating Generalization Under Distribution Shifts via Domain-Invariant Representations (ICML 2020) ☆23 · Updated 4 years ago
- Code for "Interpretable Image Recognition with Hierarchical Prototypes" ☆18 · Updated 5 years ago
- Supplementary code for "Editable Neural Networks", an ICLR 2020 submission ☆46 · Updated 5 years ago
- ☆45 · Updated 2 years ago
- ☆45 · Updated 4 years ago