samzabdiel / XAILinks
Papers and code on Explainable AI, especially for image classification
☆212 · Updated 2 years ago
Alternatives and similar repositories for XAILinks
Users interested in XAILinks are comparing it to the libraries listed below.
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆129 · Updated last year
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch (a minimal epsilon-rule sketch of the technique follows this list). ☆96 · Updated 2 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆137 · Updated 4 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- Detect model's attention ☆166 · Updated 4 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 10 months ago
- PyTorch implementation of various neural network interpretability methods ☆117 · Updated 3 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP (see the usage sketch after this list). ☆226 · Updated 11 months ago
- Concept Relevance Propagation for Localization Models, accepted at SAIAD workshop at CVPR 2023. ☆14 · Updated last year
- Explain Neural Networks using Layer-Wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curv… ☆16 · Updated 2 years ago
- The repository contains lists of papers on causality and how relevant techniques are being used to further enhance deep learning era comp… ☆93 · Updated last year
- Official Code Implementation of the paper: XAI for Transformers: Better Explanations through Conservative Propagation ☆63 · Updated 3 years ago
- Dataset and code for the CLEVR-XAI dataset. ☆31 · Updated last year
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at MICCAI 2023 conference. ☆20 · Updated last year
- ☆120 · Updated 3 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations (see the evaluation sketch after this list). ☆604 · Updated 4 months ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆36 · Updated 2 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆318 · Updated 6 months ago
- Bayesian LIME ☆17 · Updated 10 months ago
- Reliability diagrams visualize whether a classifier model needs calibration (see the binning sketch after this list). ☆151 · Updated 3 years ago
- Concept Bottleneck Models, ICML 2020 ☆204 · Updated 2 years ago
- ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021 ☆101 · Updated 2 years ago
- Basic LRP implementation in PyTorch ☆169 · Updated 11 months ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆73 · Updated 2 years ago
- [ICML 2023] Change is Hard: A Closer Look at Subpopulation Shift ☆108 · Updated last year
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta… ☆366 · Updated 3 years ago
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including batchnorm-Conv canonization and tensorbiased layers… ☆25 · Updated last year
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
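
Several of the entries above are standalone implementations of Layer-wise Relevance Propagation (LRP). As a rough illustration of the technique itself, not of any listed repository's code, here is a minimal epsilon-rule relevance pass through a single linear layer using the common gradient trick; the function name `lrp_epsilon`, the stabilizer value, and the toy dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

def lrp_epsilon(layer: nn.Linear, a: torch.Tensor, R: torch.Tensor,
                eps: float = 1e-6) -> torch.Tensor:
    """Redistribute output relevance R to the inputs a of one layer
    with the epsilon rule: R_j = a_j * sum_k w_kj * R_k / z_k."""
    a = a.clone().detach().requires_grad_(True)
    z = layer(a)                    # forward pass through this layer
    z = z + eps * z.sign()          # stabilize near-zero denominators
    s = (R / z).detach()            # per-output relevance / activation
    (z * s).sum().backward()        # a.grad now holds W^T s
    return a * a.grad               # input-wise relevance

# Toy usage: start from the layer's own output scores as relevance.
layer = nn.Linear(8, 4)
x = torch.randn(1, 8)
R_in = lrp_epsilon(layer, x, layer(x).detach())
print(R_in.shape)                   # torch.Size([1, 8])
```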
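For Zennit in particular, explanations come from pairing a composite (which maps LRP rules onto layer types) with an attributor. A minimal sketch following Zennit's published examples; the VGG16 model, random input, and class index 0 are arbitrary stand-ins, and exact signatures may differ between versions:

```python
import torch
from torchvision.models import vgg16
from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat

model = vgg16().eval()                         # untrained stand-in model
data = torch.randn(1, 3, 224, 224)             # stand-in for a real image

composite = EpsilonPlusFlat()                  # LRP rules per layer type
with Gradient(model=model, composite=composite) as attributor:
    # Attribute the score of class 0, given as a one-hot output vector.
    output, relevance = attributor(data, torch.eye(1000)[[0]])

heatmap = relevance.sum(1)                     # aggregate color channels
```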
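Quantus, also listed above, packages evaluation measures such as pixel flipping as callable metric objects. A sketch under the assumption of its documented keyword-argument call pattern; the toy model, random data, and random "attributions" are placeholders only:

```python
import numpy as np
import torch.nn as nn
import quantus

# Toy classifier and random batch; in practice use a trained model
# and attributions produced by a real explainer.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=8)
a_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)

# Pixel-Flipping: remove the most relevant features first and track
# how the model's score for the target class degrades per step.
metric = quantus.PixelFlipping(features_in_step=28)
scores = metric(model=model, x_batch=x_batch, y_batch=y_batch,
                a_batch=a_batch, device="cpu")
```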
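The reliability-diagram entry rests on a computation simple enough to sketch directly: bin predictions by confidence and compare each bin's mean confidence to its empirical accuracy. The bin count and the simulated data below are arbitrary choices:

```python
import numpy as np

def reliability_bins(confidences, correct, n_bins=15):
    """Per-bin (mean confidence, accuracy) pairs; a well-calibrated
    model has accuracy close to confidence in every bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            stats.append((confidences[mask].mean(), correct[mask].mean()))
    return stats

# Simulated overconfident model: accuracy runs ~0.1 below confidence,
# so each bin's accuracy falls short of its mean confidence.
conf = np.random.uniform(0.5, 1.0, 1000)
corr = (np.random.rand(1000) < conf - 0.1).astype(float)
print(reliability_bins(conf, corr)[:3])
```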