amparore / leaf
A Python framework for the quantitative evaluation of eXplainable AI methods
☆17 · Updated 2 years ago
Alternatives and similar repositories for leaf
Users interested in leaf are comparing it to the libraries listed below.
- Papers and code of Explainable AI, esp. w.r.t. image classification ☆214 · Updated 3 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 11 months ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP) ☆138 · Updated 4 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆83 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- Explaining Anomalies Detected by Autoencoders Using SHAP ☆42 · Updated 4 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆154 · Updated 3 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP ☆229 · Updated last week
- ☆121 · Updated 3 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆131 · Updated last year
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" (NeurIPS 2019) for… ☆25 · Updated 3 years ago
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta…" ☆371 · Updated 3 years ago
- Neural Additive Models (Google Research) ☆71 · Updated 3 years ago
- Quantus is an eXplainable AI toolkit for the responsible evaluation of neural network explanations ☆613 · Updated 3 weeks ago
- Code for our paper ☆13 · Updated 3 years ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆64 · Updated 2 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- PyTorch implementation of various neural network interpretability methods ☆118 · Updated 3 years ago
- Detect a model's attention ☆167 · Updated 5 years ago
- Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin to the domain of time series classification ☆96 · Updated last year
- A fairness library in PyTorch ☆29 · Updated last year
- Concept Bottleneck Models, ICML 2020 ☆208 · Updated 2 years ago
- Dataset and code for the CLEVR-XAI dataset ☆31 · Updated last year
- Neural Additive Models (Google Research) ☆28 · Updated last year
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆291 · Updated last year
- For calculating Shapley values via linear regression ☆70 · Updated 4 years ago
- TensorFlow 2 implementation of the paper "Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution …" ☆45 · Updated 3 years ago
- Code for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent…" ☆54 · Updated 7 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago