chr5tphr / zennit
Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as Layer-wise Relevance Propagation (LRP).
☆239 · Updated this week
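As a quick orientation, here is a minimal sketch of Zennit's composite/attributor pattern for computing an LRP attribution. The VGG16 model, input shape, and target class are illustrative assumptions for this sketch, not part of the listing.

```python
import torch
from torchvision.models import vgg16

from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat

# Illustrative classifier and input; any torch.nn.Module works the same way.
model = vgg16(weights=None).eval()
data = torch.randn(1, 3, 224, 224, requires_grad=True)

# A composite maps layer types to LRP rules; EpsilonPlusFlat is one of the
# composites shipped with Zennit.
composite = EpsilonPlusFlat()

# The attributor registers the composite's hooks, runs a forward and backward
# pass, and returns the model output together with the relevance of the input.
with Gradient(model=model, composite=composite) as attributor:
    output, relevance = attributor(data, torch.eye(1000)[[0]])  # attribute class 0

heatmap = relevance.sum(1)  # sum over color channels for a per-pixel heatmap
```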
Alternatives and similar repositories for zennit
Users interested in zennit are comparing it to the libraries listed below.
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆140 · Updated 3 weeks ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆636 · Updated 2 weeks ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆40 · Updated last year
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆139 · Updated 4 years ago
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆102 · Updated 3 years ago
- Papers and code of Explainable AI, esp. w.r.t. image classification ☆226 · Updated 3 years ago
- Dataset and code for the CLEVR-XAI dataset. ☆33 · Updated 2 years ago
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines. ☆29 · Updated 6 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated last year
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including batchnorm-Conv canonization and tensorbiased layers… ☆26 · Updated last year
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons. ☆19 · Updated last week
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023. ☆15 · Updated 2 years ago
- LENS Project ☆52 · Updated last year
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference. ☆20 · Updated 2 years ago
- ☆122 · Updated 3 years ago
- 👋 Xplique is a Neural Networks Explainability Toolbox ☆731 · Updated last week
- Basic LRP implementation in PyTorch ☆174 · Updated last year
- PyTorch implementation of various neural network interpretability methods ☆119 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆165 · Updated 3 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆37 · Updated 3 years ago
- Detect a model's attention ☆170 · Updated 5 years ago
- ViRelAy is a visualization tool for the analysis of data generated by CoRelAy. ☆29 · Updated 5 months ago
- Official PyTorch implementation of improved B-cos models ☆55 · Updated last month
- A toolkit for quantitative evaluation of data attribution methods. ☆55 · Updated 6 months ago
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆71 · Updated 2 years ago
- Mechanistic understanding and validation of large AI models with SemanticLens ☆50 · Updated 2 months ago
- A fairness library in PyTorch. ☆32 · Updated last year
- Source code of the ROAD benchmark for feature attribution methods (ICML 2022) ☆24 · Updated 2 years ago
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆102 · Updated 10 months ago
- Large-scale uncertainty benchmark in deep learning. ☆64 · Updated 8 months ago