amparore / leaf
A Python framework for the quantitative evaluation of eXplainable AI methods
☆16 · Updated last year

Related projects:
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆121 · Updated 3 years ago
- ☆16 · Updated last year
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) ☆9 · Updated last year
- Adversarial Black box Explainer generating Latent Exemplars ☆12 · Updated 2 years ago
- A PyTorch implementation of the Explainable AI work 'Contrastive layerwise relevance propagation (CLRP)' ☆17 · Updated 2 years ago
- Unified Model Interpretability Library for Time Series ☆37 · Updated 7 months ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆75 · Updated last year
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity of Explanations" (NeurIPS 2019) for… ☆24 · Updated 2 years ago
- Bayesian LIME ☆16 · Updated last month
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- Papers and code on Explainable AI, especially for image classification ☆191 · Updated 2 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆52 · Updated 2 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆125 · Updated 3 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆26 · Updated 5 months ago
- Explaining Anomalies Detected by Autoencoders Using SHAP ☆40 · Updated 3 years ago
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including batchnorm-Conv canonization and tensorbiased layers… ☆24 · Updated 6 months ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆69 · Updated last year
- Explain Neural Networks using Layer-Wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curv… ☆13 · Updated 2 years ago
- PyTorch implementation of various neural network interpretability methods ☆110 · Updated 2 years ago
- TensorFlow 2 implementation of the paper Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution … ☆45 · Updated 3 years ago
- Code for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 6 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆110 · Updated 3 months ago
- Zennit is a high-level framework in Python, built on PyTorch, for explaining and exploring neural networks with attribution methods like LRP. ☆191 · Updated 2 months ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆101 · Updated 5 months ago
- ☆118 · Updated 2 years ago
- In this work, we propose a deterministic version of Local Interpretable Model-agnostic Explanations (LIME) and the experimental results o… ☆28 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆227 · Updated last month
- Model-agnostic posthoc calibration without distributional assumptions ☆42 · Updated 11 months ago
- A fairness library in PyTorch. ☆26 · Updated last month
- ☆32 · Updated 2 months ago
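Several of the projects above evaluate feature-attribution explanations quantitatively (e.g. via pixel-flipping or fidelity metrics). As a flavor of what such an evaluation involves, here is a minimal "deletion" fidelity sketch in plain NumPy. This is an illustrative toy, not the API of leaf or any repository listed: a faithful attribution should identify features whose removal changes the model's output the most.

```python
# Illustrative sketch (not the API of any listed repository): a minimal
# "deletion" fidelity check for a feature-attribution explanation.
# Idea: zeroing the top-ranked features of a faithful explanation should
# reduce the model's output more than zeroing unimportant ones.
import numpy as np

def deletion_score(predict, x, attribution, k):
    """Drop in prediction after zeroing the k features with highest |attribution|."""
    top = np.argsort(-np.abs(attribution))[:k]
    x_del = x.copy()
    x_del[top] = 0.0
    return predict(x) - predict(x_del)

# Toy linear model: the weights are the ground-truth feature importances.
w = np.array([3.0, -2.0, 0.5, 0.0])
predict = lambda x: float(w @ x)
x = np.ones(4)

good_expl = w                                # attribution matching the true weights
bad_expl = np.array([0.0, 0.0, 1.0, 1.0])    # attribution on unimportant features

print(deletion_score(predict, x, good_expl, 2))  # 1.0 (large drop: faithful)
print(deletion_score(predict, x, bad_expl, 2))   # 0.5 (small drop: unfaithful)
```

Metrics in this family (deletion/insertion curves, pixel-flipping, infidelity) differ mainly in how features are removed and how the resulting output change is aggregated.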