rachtibat / zennit-crp
An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization
☆139 · Updated last year
Alternatives and similar repositories for zennit-crp
Users interested in zennit-crp are comparing it to the libraries listed below.
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP. ☆239 · Updated 4 months ago
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆102 · Updated 3 years ago
- Papers and code on Explainable AI, especially w.r.t. image classification. ☆225 · Updated 3 years ago
- Concept Relevance Propagation for localization models, accepted at the SAIAD workshop at CVPR 2023. ☆15 · Updated last year
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations. ☆634 · Updated 4 months ago
- Dataset and code for the CLEVR-XAI dataset. ☆33 · Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics. ☆40 · Updated last year
- Repository for our NeurIPS 2022 paper "Concept Embedding Models", our NeurIPS 2023 paper "Learning to Receive Help", and our ICML 2025 pa… ☆71 · Updated 2 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations. ☆252 · Updated last year
- LENS Project. ☆51 · Updated last year
- ☆122 · Updated 3 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆139 · Updated 4 years ago
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023). ☆71 · Updated 2 years ago
- Mechanistic understanding and validation of large AI models with SemanticLens. ☆48 · Updated 2 weeks ago
- Official PyTorch implementation of improved B-cos models. ☆55 · Updated 2 months ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at MICCAI 2023. ☆20 · Updated last year
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including batchnorm-Conv canonization and tensorbiased layers… ☆25 · Updated last year
- Reliability diagrams visualize whether a classifier model needs calibration. ☆162 · Updated 3 years ago
- PyTorch implementation of various neural network interpretability methods. ☆119 · Updated 3 years ago
- Open-source framework for uncertainty and deep learning models in PyTorch. ☆462 · Updated last week
- [NeurIPS 2024] Code for the paper: B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable. ☆38 · Updated 2 months ago
- Explain neural networks using Layer-Wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curv… ☆16 · Updated 3 years ago
- 👋 Xplique is a Neural Networks Explainability Toolbox. ☆723 · Updated this week
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023. ☆89 · Updated last year
- Code for the paper "Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery" (ECCV 2024). ☆52 · Updated last year
- Detect model's attention. ☆169 · Updated 5 years ago
- A toolkit for quantitative evaluation of data attribution methods. ☆54 · Updated 5 months ago
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers. ☆100 · Updated 9 months ago
- 👋 Aligning human & machine vision using explainability. ☆53 · Updated 2 years ago
- Code for "Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty". ☆145 · Updated 2 years ago
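Several repositories above implement Layer-wise Relevance Propagation. As a rough illustration of the core idea (not code from any of these projects), here is a minimal sketch of the standard LRP-epsilon rule for a single linear layer in NumPy: relevance from the output is divided by the stabilized pre-activations and redistributed to the inputs in proportion to their contributions.

```python
import numpy as np

def lrp_epsilon(x, W, b, R_out, eps=1e-6):
    """Propagate relevance R_out back through a linear layer z = W @ x + b
    using the LRP-epsilon rule. Minimal sketch for illustration only;
    real toolkits (Zennit, Captum, ...) handle full networks and more rules."""
    z = W @ x + b                 # forward pre-activations
    z = z + eps * np.sign(z)      # epsilon stabilizer against small denominators
    s = R_out / z                 # per-output relevance per unit of activation
    return x * (W.T @ s)          # redistribute to inputs by contribution x_i * W_ji

# Toy example with a single layer and zero bias
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W = rng.standard_normal((3, 4))
b = np.zeros(3)
z = W @ x + b
R_out = np.where(z > 0, z, 0.0)   # e.g. start relevance from positive logits
R_in = lrp_epsilon(x, W, b, R_out)
```

With zero bias and a small epsilon, the rule is (approximately) conservative: the total relevance at the input, `R_in.sum()`, matches the total relevance injected at the output, `R_out.sum()`.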
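One of the repositories above evaluates explanations with pixel-flipping. The idea can be sketched in a few lines (again an illustrative toy, not that repository's code): remove features in order of descending relevance and record how fast the model score drops; a faithful explanation produces a fast drop and thus a small area under the curve.

```python
import numpy as np

def pixel_flipping_curve(x, relevance, score_fn, baseline=0.0):
    """Most-relevant-first pixel flipping: replace features with a baseline
    value in order of descending relevance and record the model score after
    each step. `score_fn` is any callable mapping an input vector to a scalar."""
    order = np.argsort(relevance)[::-1]   # most relevant features first
    x = x.copy()
    scores = [score_fn(x)]
    for i in order:
        x[i] = baseline
        scores.append(score_fn(x))
    return np.array(scores)

# Toy example: a linear scorer whose weighted inputs are a perfect explanation
w = np.array([3.0, -1.0, 2.0, 0.5])
x = np.ones(4)
score_fn = lambda v: float(w @ v)
curve = pixel_flipping_curve(x, w * x, score_fn)
auc = curve.mean()   # simple summary; lower means a more faithful explanation
```

In the toy example the curve starts at the full score `4.5` and ends at `0.0` once every feature has been flipped to the baseline.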