rachtibat / zennit-crp
An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization
☆125 · Updated 10 months ago
Alternatives and similar repositories for zennit-crp:
Users interested in zennit-crp are comparing it to the libraries listed below.
- Zennit is a high-level framework in Python, built on PyTorch, for explaining/exploring neural networks with attribution methods like LRP.☆224 · Updated 9 months ago
- ☆12 · Updated 2 weeks ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics.☆34 · Updated last year
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch.☆91 · Updated 2 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP).☆136 · Updated 4 years ago
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023.☆14 · Updated last year
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024].☆153 · Updated last month
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference.☆19 · Updated last year
- Papers and code on Explainable AI, especially w.r.t. image classification.☆208 · Updated 2 years ago
- LENS Project☆48 · Updated last year
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines.☆28 · Updated this week
- OpenXAI: Towards a Transparent Evaluation of Model Explanations.☆245 · Updated 8 months ago
- CoSy: Evaluating Textual Explanations.☆16 · Updated 3 months ago
- Explain Neural Networks using Layer-Wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curv…☆16 · Updated 2 years ago
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023).☆62 · Updated last year
- Quantus is an eXplainable AI toolkit for the responsible evaluation of neural network explanations.☆598 · Updated 2 months ago
- 👋 Overcomplete is a Vision-based SAE Toolbox.☆53 · Updated last month
- Official PyTorch implementation of improved B-cos models.☆47 · Updated last year
- [NeurIPS 2024] Code for the paper "B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable".☆30 · Updated last month
- ☆11 · Updated last year
- Python package for extracting representations from state-of-the-art computer vision models.☆166 · Updated last week
- Dataset and code for the CLEVR-XAI dataset.☆31 · Updated last year
- A toolkit for the quantitative evaluation of data attribution methods.☆45 · Updated 2 weeks ago
- 👋 Aligning Human & Machine Vision using explainability.☆52 · Updated last year
- ViRelAy is a visualization tool for analyzing data generated by CoRelAy.☆27 · Updated 3 weeks ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper…☆61 · Updated last month
- ☆120 · Updated 3 years ago
- Prototypical Concept-based Explanations, accepted at the SAIAD workshop at CVPR 2024.☆15 · Updated 2 months ago
- PyTorch Explain: Interpretable Deep Learning in Python.☆154 · Updated 11 months ago
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024).☆42 · Updated 6 months ago
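Many of the repositories above implement or evaluate Layer-wise Relevance Propagation (LRP). For orientation, the core idea of the standard LRP-ε rule for a single linear layer can be sketched as follows. This is a minimal NumPy illustration of the textbook rule, not code from any of the listed libraries; the function name `lrp_epsilon_linear` is hypothetical:

```python
import numpy as np

def lrp_epsilon_linear(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out back to the inputs of a
    linear layer z = a @ W + b using the LRP epsilon rule."""
    z = a @ W + b                        # pre-activations, shape (n_out,)
    s = R_out / (z + eps * np.sign(z))   # stabilized relevance per output
    return a * (W @ s)                   # input relevance, shape (n_in,)

a = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0],
              [0.5,  2.0]])
b = np.zeros(2)
R_out = a @ W + b                        # start from the layer output itself
R_in = lrp_epsilon_linear(a, W, b, R_out)
print(R_in)                              # → approximately [0., 5.]
```

With a small `eps` and zero bias, the rule is (approximately) conservative: the input relevances sum to the same total as the output relevances, which is the property most of the LRP implementations listed above are built around.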