rachtibat / zennit-crp
An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization
☆123 · Updated 8 months ago
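For quick orientation, here is a minimal sketch of computing a class-conditional CRP heatmap with zennit-crp on top of Zennit. The names used (`CondAttribution`, `EpsilonPlusFlat`, the conditions dictionary, `attr.heatmap`) follow the usage pattern shown in the project README; treat the exact signatures as assumptions and verify them against the current documentation.

```python
# Minimal sketch (assumed API): class-conditional CRP heatmap with zennit-crp.
# CondAttribution, EpsilonPlusFlat and the conditions-dict format mirror the
# README usage pattern; verify exact signatures against the current docs.
import torch
from torchvision.models import vgg16

from zennit.composites import EpsilonPlusFlat   # LRP rule composite from Zennit
from crp.attribution import CondAttribution     # conditional attribution from zennit-crp

model = vgg16(weights=None).eval()              # any torchvision-style classifier
sample = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed image
sample.requires_grad_(True)                     # relevance is propagated via gradients

composite = EpsilonPlusFlat()
attribution = CondAttribution(model)

# Condition the relevance flow on output class 0 (hypothetical target class).
conditions = [{"y": [0]}]
attr = attribution(sample, conditions, composite)

print(attr.heatmap.shape)                       # per-pixel relevance, expected (1, 224, 224)
```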
Alternatives and similar repositories for zennit-crp:
Users interested in zennit-crp are comparing it to the libraries listed below.
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP. ☆209 · Updated 7 months ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics. ☆33 · Updated 10 months ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Presented at MICCAI 2023. ☆19 · Updated last year
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023. ☆13 · Updated last year
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆86 · Updated 2 years ago
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024]. ☆124 · Updated last week
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations. ☆583 · Updated last week
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023). ☆61 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations. ☆239 · Updated 6 months ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆133 · Updated 4 years ago
- Dataset and code for CLEVR-XAI. ☆31 · Updated last year
- LENS Project. ☆46 · Updated 11 months ago
- ☆120 · Updated 2 years ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆57 · Updated 3 weeks ago
- A toolkit for quantitative evaluation of data attribution methods. ☆39 · Updated this week
- Papers and code on Explainable AI, especially image classification. ☆203 · Updated 2 years ago
- ☆11 · Updated last year
- 👋 Aligning Human & Machine Vision using explainability. ☆48 · Updated last year
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at the CVPR 2024 XAI4CV Works… ☆12 · Updated 8 months ago
- CoSy: Evaluating Textual Explanations. ☆14 · Updated 3 weeks ago
- 👋 Xplique is a Neural Networks Explainability Toolbox. ☆664 · Updated 4 months ago
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines. ☆27 · Updated 2 years ago
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024). ☆35 · Updated 3 months ago
- Uncertainty-aware representation learning (URL) benchmark. ☆100 · Updated 11 months ago
- PyTorch implementation of various neural network interpretability methods. ☆115 · Updated 2 years ago
- Python package for extracting representations from state-of-the-art computer vision models. ☆159 · Updated 3 weeks ago
- Official PyTorch implementation of improved B-cos models. ☆45 · Updated 11 months ago
- NeurIPS 2021 | Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information. ☆32 · Updated 3 years ago
- A new framework to transform any neural network into an interpretable concept-bottleneck-model (CBM) without needing labeled concept dat… ☆88 · Updated 10 months ago