amparore / leaf
A Python framework for the quantitative evaluation of eXplainable AI methods
☆17 · Updated 2 years ago
Alternatives and similar repositories for leaf:
Users interested in leaf are comparing it to the libraries listed below:
- Python implementation for evaluating the explanations presented in “On the (In)fidelity and Sensitivity of Explanations” (NeurIPS 2019) for… ☆25 · Updated 3 years ago
- ☆17 · Updated last year
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆135 · Updated 4 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- Papers and code on Explainable AI, especially for image classification ☆204 · Updated 2 years ago
- Bayesian LIME ☆17 · Updated 7 months ago
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆89 · Updated 2 years ago
- ☆11 · Updated last year
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including batchnorm-Conv canonization and tensorbiased layers… ☆25 · Updated last year
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆73 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- ☆33 · Updated 9 months ago
- TensorFlow 2 implementation of the paper Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution … ☆45 · Updated 3 years ago
- Code for reproducing the contrastive explanations in “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 6 years ago
- This repository contains the implementation of Label-Free XAI, a new framework to adapt explanation methods to unsupervised models. For m… ☆24 · Updated 2 years ago
- Repository for our NeurIPS 2022 paper “Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off” and our NeurIPS 2023 paper… ☆59 · Updated this week
- Code for using CDEP from the paper “Interpretations are useful: penalizing explanations to align neural networks with prior knowledge” ht… ☆127 · Updated 4 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆124 · Updated 9 months ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆54 · Updated 2 years ago
- This repository collects papers and tools on Explainable AI ☆36 · Updated 5 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆36 · Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated 11 months ago
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) ☆10 · Updated 2 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆43 · Updated 8 months ago
- This code package implements the prototypical part network (ProtoPNet) from the paper “This Looks Like That: Deep Learning for Interpreta… ☆358 · Updated 2 years ago
- Quantitative Testing with Concept Activation Vectors in PyTorch ☆42 · Updated 6 years ago
- An amortized approach for calculating local Shapley value explanations ☆97 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆240 · Updated 7 months ago
- Explaining Anomalies Detected by Autoencoders Using SHAP ☆40 · Updated 3 years ago
- This repository provides a PyTorch implementation of “Fooling Neural Network Interpretations via Adversarial Model Manipulation”. Our pap… ☆22 · Updated 4 years ago