nesl / Explainability-Study
How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods
☆23 · Updated 4 years ago
Alternatives and similar repositories for Explainability-Study
Users interested in Explainability-Study are comparing it to the libraries listed below.
- Fine-grained ImageNet annotations ☆29 · Updated 5 years ago
- Interpretable Explanations of Black Boxes by Meaningful Perturbation Pytorch ☆12 · Updated 9 months ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆35 · Updated last year
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- Parameter-Space Saliency Maps for Explainability ☆23 · Updated 2 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Code for our ICML '19 paper: Neural Network Attributions: A Causal Perspective. ☆51 · Updated 3 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- Code for Overinterpretation paper ☆19 · Updated last year
- ☆18 · Updated 3 years ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space ☆41 · Updated 4 years ago
- Code for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 6 years ago
- ☆38 · Updated 3 years ago
- Explanation Optimization ☆13 · Updated 4 years ago
- Pytorch implementation for "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 2 years ago
- Code/figures in Right for the Right Reasons ☆55 · Updated 4 years ago
- ICML 2020, Estimating Generalization under Distribution Shifts via Domain-Invariant Representations ☆23 · Updated 4 years ago
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- Code for "Interpretable Image Recognition with Hierarchical Prototypes" ☆18 · Updated 5 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆54 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 2 weeks ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆31 · Updated 5 years ago
- Code for "Supermasks in Superposition" ☆124 · Updated last year
- Quantitative Testing with Concept Activation Vectors in PyTorch ☆42 · Updated 6 years ago
- Supplementary code for Editable Neural Networks, an ICLR 2020 submission ☆46 · Updated 5 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆75 · Updated 7 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Geometric Certifications of Neural Nets ☆42 · Updated 2 years ago