sebastian-lapuschkin / lrp_toolbox
The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks, supporting Matlab and Python. The Toolbox also realizes LRP functionality for the Caffe Deep Learning Framework, as an extension of the Caffe source code published in 10/2015.
☆331 · Updated 2 years ago
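For orientation before the list of alternatives: the core operation every LRP implementation shares is a layer-wise relevance redistribution rule. Below is a minimal NumPy sketch of the commonly used ε-rule for a single dense layer; the function name and signature are illustrative stand-ins, not the toolbox's actual Matlab/Python/Caffe API.

```python
# Minimal, illustrative NumPy sketch of the LRP epsilon-rule for one dense
# layer. It implements the redistribution formula
#   R_i = sum_j  a_i * w_ij / (z_j + eps * sign(z_j)) * R_j
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out from a layer's outputs to its inputs.

    a     : (n_in,)        input activations to the layer
    W     : (n_in, n_out)  weight matrix
    b     : (n_out,)       bias vector
    R_out : (n_out,)       relevance assigned to the layer's outputs
    """
    z = a @ W + b                           # forward pre-activations z_j
    z += eps * np.where(z >= 0, 1.0, -1.0)  # sign-matched epsilon stabilizer
    s = R_out / z                           # element-wise relevance quotients
    return a * (W @ s)                      # R_i = a_i * sum_j w_ij * s_j

# Toy usage: relevance is (approximately) conserved across the layer.
rng = np.random.default_rng(0)
a, W, b = rng.random(4), rng.standard_normal((4, 3)), np.zeros(3)
R_out = rng.random(3)
R_in = lrp_epsilon(a, W, b, R_out)
print(R_in, R_in.sum(), R_out.sum())  # sums match up to the eps term
```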
Alternatives and similar repositories for lrp_toolbox:
Users interested in lrp_toolbox are comparing it to the libraries listed below.
- ☆100 · Updated 6 years ago
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆223 · Updated 4 years ago
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆745 · Updated 4 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆1,290 · Updated last year
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers ☆98 · Updated 6 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- Detect a model's attention ☆165 · Updated 4 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- ☆109 · Updated 2 years ago
- Towards Automatic Concept-based Explanations ☆159 · Updated 10 months ago
- ☆133 · Updated 5 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆214 · Updated 8 months ago
- PyTorch implementation of "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆334 · Updated 3 years ago
- ☆51 · Updated 4 years ago
- Public-facing deeplift repo ☆852 · Updated 2 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆135 · Updated 4 years ago
- Layerwise Relevance Propagation with Deep Taylor Series in TensorFlow ☆71 · Updated 8 years ago
- Code for the TCAV ML interpretability project ☆634 · Updated 7 months ago
- PyTorch implementation of various neural network interpretability methods ☆116 · Updated 3 years ago
- Implementations of some popular Saliency Maps in Keras ☆165 · Updated 5 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆594 · Updated last month
- A toolkit to explain Keras model predictions. ☆15 · Updated 7 months ago
- TensorFlow tutorial for various Deep Neural Network visualization techniques ☆346 · Updated 4 years ago
- Attributing predictions made by the Inception network using the Integrated Gradients method (a minimal sketch of the method appears after this list) ☆615 · Updated 3 years ago
- ☆578 · Updated last year
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆130 · Updated 4 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆124 · Updated 9 months ago
- Reference implementation for "explanations can be manipulated and geometry is to blame" ☆36 · Updated 2 years ago
- Code for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 6 years ago
- Framework-agnostic implementation of state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more; a minimal SmoothGrad sketch follows this list). ☆970 · Updated last year
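Since several entries above implement Integrated Gradients (most directly the Inception attribution repository), here is a hedged NumPy sketch of the method's Riemann-sum approximation. The toy model `f`, its analytic gradient `grad_f`, and the function name are illustrative assumptions, not any listed library's API; in practice the gradient would come from autodiff.

```python
# Minimal NumPy sketch of Integrated Gradients (Sundararajan et al., 2017):
#   IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + t(x - x')) dt,
# approximated with a midpoint Riemann sum along the straight-line path.
import numpy as np

def f(x):                      # toy "model": F(x) = sum(x^2)
    return np.sum(x ** 2)

def grad_f(x):                 # its analytic gradient: dF/dx_i = 2 x_i
    return 2 * x

def integrated_gradients(x, baseline, grad_fn, steps=64):
    # Interpolation points along the path from baseline to x (midpoint rule).
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline + alphas[:, None] * (x - baseline)
    avg_grad = np.mean([grad_fn(p) for p in path], axis=0)
    return (x - baseline) * avg_grad        # per-feature attribution

x, baseline = np.array([1.0, -2.0, 3.0]), np.zeros(3)
attr = integrated_gradients(x, baseline, grad_f)
# Completeness check: attributions sum to F(x) - F(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```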
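Likewise, for the saliency repositories that list SmoothGrad among their methods, a minimal sketch of the idea: average the input gradient over several noisy copies of the input. Again, `grad_fn` and the toy gradient below are assumptions for illustration, not the API of any repository above.

```python
# Minimal NumPy sketch of SmoothGrad (Smilkov et al., 2017): average the
# saliency gradient over n Gaussian-noised copies of the input. `grad_fn`
# stands in for the gradient of a network's class score w.r.t. the input.
import numpy as np

def smoothgrad(x, grad_fn, n=50, sigma=0.15, seed=0):
    rng = np.random.default_rng(seed)
    grads = [grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n)]
    return np.mean(grads, axis=0)

# Toy usage with an analytic gradient (stand-in for autodiff on a model).
grad_fn = lambda x: 2 * x          # gradient of F(x) = sum(x^2)
x = np.array([1.0, -2.0, 3.0])
print(smoothgrad(x, grad_fn))      # ~[2, -4, 6], smoothed over the noise
```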