sebastian-lapuschkin / lrp_toolbox
The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks, supporting Matlab and Python. The Toolbox also realizes LRP functionality for the Caffe Deep Learning Framework as an extension of the Caffe source code published in 10/2015.
☆332 · Updated 2 years ago
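To give a concrete idea of what "LRP functionality" means in practice, below is a minimal sketch of the LRP epsilon rule for a tiny dense ReLU network, written in plain NumPy. This is not the lrp_toolbox API; the network sizes, random weights, and the helper name `lrp_dense_epsilon` are illustrative assumptions only.

```python
# Minimal LRP epsilon-rule sketch in plain NumPy (not the lrp_toolbox API).
import numpy as np

def lrp_dense_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the layer input a (epsilon rule)."""
    z = W @ a + b                         # pre-activations of this layer
    s = R_out / (z + eps * np.sign(z))    # stabilized element-wise division
    return a * (W.T @ s)                  # relevance assigned to each input unit

rng = np.random.default_rng(0)

# Toy 2-layer network: 4 inputs -> 3 hidden (ReLU) -> 2 outputs (all made up).
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)

x  = rng.normal(size=4)                   # input sample
a1 = np.maximum(0.0, W1 @ x + b1)         # hidden activations
y  = W2 @ a1 + b2                         # output scores

# Start from the score of the predicted class and propagate relevance backwards.
R2 = np.zeros_like(y)
R2[np.argmax(y)] = y[np.argmax(y)]
R1 = lrp_dense_epsilon(a1, W2, b2, R2)    # relevance on hidden units
R0 = lrp_dense_epsilon(x,  W1, b1, R1)    # relevance on input features ("heatmap")

print("input relevances:", R0, "sum:", R0.sum())
```

With zero biases and a small epsilon, the summed input relevance approximately equals the propagated output score, which is the conservation property LRP implementations aim for.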
Alternatives and similar repositories for lrp_toolbox:
Users interested in lrp_toolbox are comparing it to the libraries listed below.
- ☆100 · Updated 7 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆1,295 · Updated 3 weeks ago
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆224 · Updated 5 years ago
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆750 · Updated 4 years ago
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers ☆98 · Updated 6 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- Public facing deeplift repo ☆853 · Updated 3 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation ☆336 · Updated 3 years ago
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆624 · Updated 3 years ago
- Code for all experiments. ☆318 · Updated 4 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆136 · Updated 4 years ago
- ☆134 · Updated 5 years ago
- Detect model's attention ☆165 · Updated 4 years ago
- Tensorflow 2.1 implementation of LRP for LSTMs ☆38 · Updated 2 years ago
- Towards Automatic Concept-based Explanations ☆159 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆225 · Updated 9 months ago
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). ☆973 · Updated last year
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks ☆345 · Updated 4 years ago
- Full-gradient saliency maps ☆210 · Updated 2 years ago
- Layerwise Relevance Propagation with Deep Taylor Series in TensorFlow ☆71 · Updated 8 years ago
- ☆226 · Updated 4 years ago
- Code for the TCAV ML interpretability project ☆639 · Updated 9 months ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆61 · Updated 5 years ago
- Tools for training explainable models using attribution priors. ☆124 · Updated 4 years ago
- Tensorflow tutorial for various Deep Neural Network visualization techniques ☆347 · Updated 4 years ago
- Code for "High-Precision Model-Agnostic Explanations" paper ☆802 · Updated 2 years ago
- Papers and code of Explainable AI, especially w.r.t. image classification ☆208 · Updated 2 years ago
- The toolkit to explain Keras model predictions. ☆15 · Updated 9 months ago
- Network Dissection http://netdissect.csail.mit.edu for quantifying interpretability of deep CNNs. ☆446 · Updated 6 years ago