lapalap / B-LRP
B-LRP is the repository for the paper How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks
☆18 · Updated 2 years ago
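In rough terms, the paper treats an explanation as a random quantity: relevance maps are computed with LRP over many models drawn from a weight posterior, and percentiles of those maps indicate how stable each attributed feature is. Below is a minimal sketch of that aggregation step, assuming a collection of sampled models (e.g., from MC dropout) and a generic `lrp_explain(model, x)` function as a hypothetical stand-in for any LRP implementation — not the repository's actual API:

```python
import numpy as np

def percentile_explanation(model_samples, x, lrp_explain, q=5):
    """Aggregate relevance maps over models sampled from a weight
    posterior and return the q-th percentile map.

    model_samples -- iterable of models drawn e.g. via MC dropout
                     (hypothetical interface, not the repo's API)
    lrp_explain   -- function(model, x) -> relevance map array;
                     a stand-in for any LRP implementation
    q             -- percentile; a low q keeps only features that
                     are relevant under (nearly) all sampled models
    """
    maps = np.stack([lrp_explain(m, x) for m in model_samples])
    return np.percentile(maps, q, axis=0)
```

A low percentile yields a conservative explanation, while the median recovers something close to a standard point-estimate relevance map.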
Related projects
Alternatives and complementary repositories for B-LRP
- Combating hidden stratification with GEORGE ☆62 · Updated 3 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- A PyTorch implementation of the explainable-AI work 'Contrastive layerwise relevance propagation (CLRP)' ☆17 · Updated 2 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 3 years ago
- Code for the CVPR 2021 paper "Understanding Failures of Deep Networks via Robust Feature Extraction" ☆35 · Updated 2 years ago
- Self-Explaining Neural Networks ☆39 · Updated 4 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 2 years ago
- An Empirical Framework for Domain Generalization in Clinical Settings ☆28 · Updated 2 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 3 years ago
- The official repository for the paper "Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning" ☆40 · Updated 2 years ago
- Model-agnostic post-hoc calibration without distributional assumptions ☆42 · Updated last year
- h-Shap provides an exact, fast, hierarchical implementation of Shapley coefficients for image explanations ☆15 · Updated last year
- Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI ☆52 · Updated 2 years ago
- Implementation of the spotlight, a method for discovering systematic errors in deep learning models ☆10 · Updated 3 years ago
- Code and data for the "Multi-domain adversarial learning" paper, Schoenauer-Sebag et al., accepted at ICLR 2019 ☆38 · Updated 3 years ago
- A benchmark for evaluating the quality of local machine-learning explanations generated by any explainer, for text and image data ☆30 · Updated 3 years ago
- Official implementation of "Robust Semantic Interpretability: Revisiting Concept Activation Vectors" ☆11 · Updated 4 years ago
- Algorithms for abstention, calibration and domain adaptation to label shift ☆36 · Updated 4 years ago
- Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude's Variance Matters ☆30 · Updated 4 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆125 · Updated 3 years ago
- Tools for training explainable models using attribution priors ☆121 · Updated 3 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆60 · Updated 3 years ago
- Visual Explanation using Uncertainty-based Class Activation Maps ☆21 · Updated 4 years ago
- NoiseGrad (and its extension NoiseGrad++) is a method that enhances explanations of neural networks by adding noise to model weights; see the sketch after this list ☆21 · Updated last year
- A curated collection of papers and tools for Explainable AI ☆36 · Updated 4 years ago
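Since the NoiseGrad entry above describes its mechanism in one line, here is a minimal sketch of that idea, assuming a PyTorch model and a generic `explain(model, x)` attribution function (a hypothetical stand-in, not that repository's actual API): perturb the weights with multiplicative Gaussian noise several times and average the resulting explanations.

```python
import copy
import torch

def noisegrad_explain(model, x, explain, n_samples=10, sigma=0.1):
    """Average explanations over weight-perturbed copies of the model.

    explain -- function(model, x) -> attribution tensor; a stand-in
               for any attribution method (gradients, LRP, ...)
    sigma   -- relative scale of the multiplicative Gaussian noise
    """
    maps = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)  # leave the original model intact
        with torch.no_grad():
            for p in noisy.parameters():
                # multiplicative noise: w <- w * (1 + sigma * eps)
                p.mul_(1.0 + sigma * torch.randn_like(p))
        maps.append(explain(noisy, x))
    return torch.stack(maps).mean(dim=0)
```

Averaging over weight perturbations smooths the attribution map in much the same way that SmoothGrad averages over input perturbations.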