rachtibat / zennit-crp
An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization
☆130 · Updated last year
Alternatives and similar repositories for zennit-crp
Users interested in zennit-crp are comparing it to the libraries listed below:
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆227 · Updated this week
- Dataset and code for the CLEVR-XAI dataset. ☆31 · Updated last year
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆96 · Updated 2 years ago
- Papers and code on Explainable AI, especially w.r.t. image classification. ☆213 · Updated 3 years ago
- Concept Relevance Propagation for localization models, accepted at the SAIAD workshop at CVPR 2023. ☆15 · Updated last year
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations. ☆607 · Updated last week
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆137 · Updated 4 years ago
- ☆121 · Updated 3 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations. ☆247 · Updated 10 months ago
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024]. ☆172 · Updated this week
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics. ☆36 · Updated last year
- LENS Project. ☆48 · Updated last year
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆63 · Updated last month
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023). ☆65 · Updated last year
- ☆13 · Updated 2 months ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference. ☆20 · Updated last year
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023. ☆79 · Updated last year
- 👋 Overcomplete is a vision-based SAE toolbox. ☆67 · Updated 3 months ago
- A toolkit for quantitative evaluation of data attribution methods. ☆49 · Updated this week
- Official PyTorch implementation of improved B-cos models. ☆50 · Updated last year
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including batchnorm-Conv canonization and tensorbiased layers… ☆25 · Updated last year
- PyTorch implementation of various neural network interpretability methods. ☆118 · Updated 3 years ago
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons. ☆16 · Updated 3 weeks ago
- The repository contains lists of papers on causality and how relevant techniques are being used to further enhance deep learning era comp… ☆94 · Updated last year
- 👋 Aligning human & machine vision using explainability. ☆52 · Updated 2 years ago
- [NeurIPS 2024] Code for the paper "B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable". ☆31 · Updated last month
- Concept Bottleneck Models, ICML 2020. ☆205 · Updated 2 years ago
- PyTorch Explain: Interpretable Deep Learning in Python. ☆155 · Updated last year
- Large-scale uncertainty benchmark in deep learning. ☆60 · Updated 2 months ago
- [ICLR 23] A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled c… ☆106 · Updated last year
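Several of the repositories above implement Layer-wise Relevance Propagation (LRP). As a rough illustration of what these toolkits compute, here is a minimal, dependency-free sketch of the LRP epsilon rule on a toy two-layer ReLU network. The network sizes, weights, and inputs are invented for the example; real use would go through a library such as Zennit (composites and attributors) rather than hand-rolled code.

```python
# Minimal sketch of the LRP epsilon rule, in plain Python for clarity.
# All weights and inputs below are made-up toy values.

EPS = 1e-6  # stabilizer; keeps the sign of the pre-activation z_k

def forward(a, W, b):
    """One linear layer followed by ReLU; W[i][k] maps input i to output k."""
    z = [sum(ai * W[i][k] for i, ai in enumerate(a)) + b[k] for k in range(len(b))]
    return [max(0.0, zk) for zk in z]

def lrp_epsilon(a, W, b, R_out):
    """Redistribute output relevance R_out to the layer's inputs (epsilon rule):
    R_i = sum_k a_i * W[i][k] / (z_k + eps * sign(z_k)) * R_k
    """
    z = [sum(ai * W[i][k] for i, ai in enumerate(a)) + b[k] for k in range(len(R_out))]
    denom = [zk + EPS * (1.0 if zk >= 0 else -1.0) for zk in z]
    return [
        sum(a[i] * W[i][k] / denom[k] * R_out[k] for k in range(len(R_out)))
        for i in range(len(a))
    ]

# toy 2-2-1 network
W1 = [[1.0, -0.5], [0.5, 1.0]]; b1 = [0.0, 0.0]
W2 = [[1.0], [1.0]];            b2 = [0.0]
x = [1.0, 2.0]

h = forward(x, W1, b1)   # hidden activations
y = forward(h, W2, b2)   # network output

R_hidden = lrp_epsilon(h, W2, b2, y)        # relevance of hidden units
R_input = lrp_epsilon(x, W1, b1, R_hidden)  # relevance of the inputs

# With zero biases and a small epsilon, relevance is (almost) conserved:
print(sum(R_input), y[0])  # both ≈ 3.5
```

The printed sums illustrate the conservation property that makes LRP attributions interpretable: the relevance assigned to the inputs (approximately) adds up to the network's output score, with the epsilon stabilizer absorbing a negligible residue.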