chr5tphr / zennit
Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as Layer-wise Relevance Propagation (LRP).
☆233 · Updated 3 months ago
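To give a sense of what LRP-style attribution does, here is a minimal, hand-rolled sketch of the LRP epsilon rule for a single dense layer, using plain Python lists. This is not Zennit's API; it only illustrates the relevance-redistribution idea that Zennit automates for full PyTorch models.

```python
def lrp_epsilon(activations, weights, relevance_out, eps=1e-6):
    """Redistribute output relevance to inputs via the LRP epsilon rule.

    activations:   inputs a_j to the layer (length J)
    weights:       weights[j][k] connecting input j to output k (J x K)
    relevance_out: relevance R_k assigned to each output (length K)
    """
    n_out = len(relevance_out)
    # Pre-activations z_k = sum_j a_j * w_jk.
    z = [sum(a * w[k] for a, w in zip(activations, weights))
         for k in range(n_out)]
    # Epsilon rule: R_j = sum_k (a_j * w_jk) / (z_k + eps * sign(z_k)) * R_k.
    # The eps term stabilizes the division when z_k is near zero.
    relevance_in = []
    for j, a in enumerate(activations):
        r = sum(
            a * weights[j][k]
            / (z[k] + eps * (1 if z[k] >= 0 else -1))
            * relevance_out[k]
            for k in range(n_out)
        )
        relevance_in.append(r)
    return relevance_in

# Two inputs, one output: relevance is split in proportion to each
# input's contribution a_j * w_jk, and (up to eps) is conserved.
print(lrp_epsilon([1.0, 2.0], [[0.5], [0.25]], [1.0]))
```

Both inputs contribute 0.5 to the pre-activation here, so each receives half of the output relevance; the inputs' relevances sum back to the output relevance, which is the conservation property LRP is built around.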
Alternatives and similar repositories for zennit
Users interested in zennit are comparing it to the libraries listed below.
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆137 · Updated last year
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆628 · Updated 3 months ago
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆98 · Updated 3 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆39 · Updated last year
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆139 · Updated 4 years ago
- Papers and code on Explainable AI, especially for image classification ☆223 · Updated 3 years ago
- Dataset and code for CLEVR-XAI. ☆32 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆248 · Updated last year
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines. ☆28 · Updated 3 months ago
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including BatchNorm-Conv canonization and tensor-biased layers… ☆25 · Updated last year
- ☆122 · Updated 3 years ago
- PyTorch implementation of various neural network interpretability methods ☆118 · Updated 3 years ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference. ☆20 · Updated last year
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons. ☆18 · Updated 4 months ago
- Basic LRP implementation in PyTorch ☆172 · Updated last year
- Concept Relevance Propagation for localization models, accepted at the SAIAD workshop at CVPR 2023. ☆15 · Updated last year
- Detect model's attention ☆168 · Updated 5 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆160 · Updated 3 years ago
- Reference implementation for "Explanations Can Be Manipulated and Geometry Is to Blame" ☆37 · Updated 3 years ago
- A toolkit for quantitative evaluation of data attribution methods. ☆53 · Updated 4 months ago
- 👋 Xplique is a Neural Networks Explainability Toolbox ☆715 · Updated this week
- ViRelAy is a visualization tool for the analysis of data as generated by CoRelAy. ☆27 · Updated 3 months ago
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆100 · Updated 8 months ago
- A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models. ☆582 · Updated last year
- LENS Project ☆51 · Updated last year
- Mechanistic understanding and validation of large AI models with SemanticLens ☆46 · Updated last month
- Concept Bottleneck Models, ICML 2020 ☆223 · Updated 2 years ago
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta… ☆376 · Updated 3 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆75 · Updated 3 years ago
- Code for RELAX, a framework for explaining representations. ☆11 · Updated last year