amparore / leaf
A Python framework for the quantitative evaluation of eXplainable AI methods
☆17 · Updated 2 years ago
Alternatives and similar repositories for leaf
Users interested in leaf are comparing it to the libraries listed below.
- Papers and code of Explainable AI, esp. w.r.t. image classification ☆218 · Updated 3 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 2 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆138 · Updated 4 years ago
- OpenXAI : Towards a Transparent Evaluation of Model Explanations ☆248 · Updated last year
- ☆18 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- ☆122 · Updated 3 years ago
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weig… ☆22 · Updated 2 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆74 · Updated 3 years ago
- implements some LRP rules to get explanations for Resnets and Densenet-121, including batchnorm-Conv canonization and tensorbiased layers… ☆25 · Updated last year
- Reliability diagrams visualize whether a classifier model needs calibration ☆158 · Updated 3 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆627 · Updated 3 months ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Detect model's attention ☆168 · Updated 5 years ago
- Codes for reproducing the contrastive explanation in “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 7 years ago
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta… ☆375 · Updated 3 years ago
- Pytorch implementation of various neural network interpretability methods ☆118 · Updated 3 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆134 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆233 · Updated 2 months ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆75 · Updated 3 years ago
- A fairness library in PyTorch. ☆31 · Updated last year
- ☆33 · Updated last year
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆328 · Updated 10 months ago
- Code for our paper ☆13 · Updated 3 years ago
- Dataset and code for the CLEVR-XAI dataset. ☆32 · Updated 2 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆54 · Updated 3 years ago
- An amortized approach for calculating local Shapley value explanations ☆101 · Updated last year
- bayesian lime ☆18 · Updated last year
- Build and train Lipschitz constrained networks: TensorFlow implementation of k-Lipschitz layers ☆100 · Updated 7 months ago