The LRP Toolbox provides simple, accessible stand-alone implementations of Layer-wise Relevance Propagation (LRP) for artificial neural networks in Matlab and Python. It also realizes LRP functionality for the Caffe Deep Learning Framework, as an extension of the Caffe source code published in October 2015.
☆336, updated Jun 13, 2022
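The technique the toolbox implements, LRP, propagates a prediction backward through the network, redistributing the output score layer by layer onto the input features. As a rough illustration (not the toolbox's own API), here is a minimal sketch of the LRP-epsilon rule for a fully connected ReLU network in NumPy; the function name and the weight/bias list convention are assumptions for this example:

```python
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Illustrative LRP-epsilon for a fully connected ReLU network.

    weights[i]: array of shape (n_in, n_out); biases[i]: shape (n_out,).
    Returns a relevance score per input feature for the argmax class.
    """
    # Forward pass, storing the activation entering each layer.
    activations = [x]
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = a @ W + b
        # ReLU on hidden layers; the final layer stays linear (logits).
        a = np.maximum(z, 0.0) if i < len(weights) - 1 else z
        activations.append(a)

    # Initialize relevance at the output: the winning logit only.
    R = np.zeros_like(a)
    R[np.argmax(a)] = a[np.argmax(a)]

    # Backward pass: redistribute relevance with the epsilon rule,
    # R_j = a_j * sum_k W_jk * R_k / (z_k + eps * sign(z_k)).
    for i in range(len(weights) - 1, -1, -1):
        a_prev = activations[i]
        z = a_prev @ weights[i] + biases[i]
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer
        s = R / z
        R = a_prev * (weights[i] @ s)
    return R
```

With zero biases and a small epsilon, the scores are approximately conservative: the relevances summed over the input approximately equal the winning logit, which is the sanity check usually applied to LRP implementations.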
Alternatives and similar repositories for lrp_toolbox
Users interested in lrp_toolbox are comparing it to the libraries listed below.
- ☆100, updated Mar 29, 2018
- A toolbox to iNNvestigate neural networks' predictions! (☆1,307, updated Apr 11, 2025)
- Layerwise Relevance Propagation with Deep Taylor Series in TensorFlow (☆72, updated Jan 22, 2017)
- Basic LRP implementation in PyTorch (☆174, updated Jul 25, 2024)
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… (☆758, updated Aug 25, 2020)
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers (☆97, updated Aug 21, 2018)
- Explain Neural Networks using Layer-Wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curv… (☆17, updated Aug 7, 2022)
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. (☆243, updated Jan 30, 2026)
- Explaining the unique nature of individual gait patterns with deep learning (☆28, updated Jan 18, 2021)
- ☆34, updated Jan 3, 2023
- Public facing deeplift repo (☆871, updated Apr 28, 2022)
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). (☆994, updated Mar 20, 2024)
- Prototyping eXplainable Artificial Intelligence (XAI) (☆26, updated Jan 5, 2023)
- ☆18, updated Jun 10, 2020
- Gerster et al. 2022, "Separating neural oscillations from aperiodic 1/f activity: challenges and recommendations." (☆15, updated Dec 6, 2022)
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability". (☆29, updated May 2, 2019)
- Lime: Explaining the predictions of any machine learning classifier (☆12,121, updated Jul 25, 2024)
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) (☆82, updated Dec 8, 2022)
- Model interpretability and understanding for PyTorch (☆5,614, updated this week)
- Dataset and code for the CLEVR-XAI dataset (☆33, updated Oct 3, 2023)
- ☆19, updated Jun 15, 2020
- Interpretability and explainability of data and machine learning models (☆1,775, updated Mar 18, 2026)
- Interpreting CNN Knowledge via an Explanatory Graph (☆71, updated Oct 25, 2020)
- Interpretability Methods for tf.keras models with Tensorflow 2.x (☆1,037, updated Jun 3, 2024)
- Code for the TCAV ML interpretability project (☆653, updated Feb 5, 2026)
- Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers, paper accepted at eXCV workshop of ECCV 2… (☆30, updated Jan 6, 2025)
- Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin to the domain of time series classification (☆96, updated Feb 1, 2024)
- A PyTorch implementation of the Explainable AI work "Contrastive Layerwise Relevance Propagation (CLRP)" (☆17, updated Jun 24, 2022)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (☆44, updated Apr 17, 2024)
- Official code implementation of the paper "XAI for Transformers: Better Explanations through Conservative Propagation" (☆67, updated Feb 14, 2022)
- Network Dissection (http://netdissect.csail.mit.edu) for quantifying interpretability of deep CNNs (☆452, updated Aug 25, 2018)
- Reference implementation for "Explanations can be manipulated and geometry is to blame" (☆37, updated Jul 24, 2022)
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" 🐍 (☆46, updated Nov 6, 2024)
- PyTorch implementation of recent visual attribution methods for model interpretability (☆146, updated Feb 27, 2020)
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations (☆659, updated Apr 22, 2026)
- Interesting resources related to XAI (Explainable Artificial Intelligence) (☆854, updated May 31, 2022)
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) (☆62, updated Jul 1, 2019)
- The official repository containing the source code to the explAIner publication (☆32, updated Apr 29, 2024)
- DeepCrime, a mutation testing tool for deep learning systems (☆16, updated Sep 23, 2023)