nesl/Explainability-Study
How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods
☆23 · Updated 4 years ago
Alternatives and similar repositories for Explainability-Study:
Users interested in Explainability-Study are comparing it to the repositories listed below.
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 4 years ago
- Parameter-Space Saliency Maps for Explainability ☆23 · Updated 2 years ago
- Code/figures in Right for the Right Reasons ☆55 · Updated 4 years ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- Code accompanying the paper: Meta-Learning to Improve Pre-Training ☆37 · Updated 3 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- ☆53 · Updated 6 years ago
- Geometric Certifications of Neural Nets ☆41 · Updated 2 years ago
- Figures & code from the paper "Shortcut Learning in Deep Neural Networks" (Nature Machine Intelligence 2020) ☆96 · Updated 2 years ago
- PyTorch implementation for "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 2 years ago
- Explanation Optimization ☆13 · Updated 4 years ago
- Self-Explaining Neural Networks ☆40 · Updated 5 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 4 years ago
- Computing various norms/measures on over-parametrized neural networks ☆49 · Updated 6 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆35 · Updated 2 years ago
- Label shift experiments ☆16 · Updated 4 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks. ☆42 · Updated last year
- OD-test: A Less Biased Evaluation of Out-of-Distribution (Outlier) Detectors (PyTorch) ☆62 · Updated last year
- ☆44 · Updated 2 years ago
- Learning perturbation sets for robust machine learning ☆64 · Updated 3 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- Code for the ICML paper "SelectiveNet: A Deep Neural Network with an Integrated Reject Option" ☆46 · Updated 5 years ago
- ☆54 · Updated 4 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆74 · Updated 7 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated 11 months ago
- Code to accompany the paper Radial Bayesian Neural Networks: Beyond Discrete Support In Large-Scale Bayesian Deep Learning ☆33 · Updated 4 years ago
- ☆40 · Updated 4 years ago
- [ECCV 2018] Code for Choose Your Neuron: Incorporating Domain Knowledge Through Neuron Importance ☆57 · Updated 6 years ago