alewarne / Layerwise-Relevance-Propagation-for-LSTMs
TensorFlow 2.1 implementation of LRP for LSTMs
☆ 37 · Updated last year
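For readers new to the technique: LRP propagates a prediction score backward through the network, layer by layer, so that each input receives a share of the output relevance. Below is a minimal sketch of the generic LRP-ε rule for a single dense layer in TensorFlow 2.x; it is an illustration of the rule only, not code from this repository, and the function name and `epsilon` default are my own.

```python
# Minimal sketch of the LRP-epsilon rule for one dense layer (illustrative only,
# not taken from this repository). Assumes TensorFlow 2.x.
import tensorflow as tf

def lrp_epsilon_dense(a, w, b, relevance_out, epsilon=1e-6):
    """Redistribute the relevance of a dense layer's outputs onto its inputs.

    a:             input activations, shape (batch, in_dim)
    w:             layer weights,     shape (in_dim, out_dim)
    b:             layer bias,        shape (out_dim,)
    relevance_out: relevance of the outputs, shape (batch, out_dim)
    """
    z = tf.matmul(a, w) + b                        # recompute pre-activations
    z = z + epsilon * tf.where(z >= 0, 1.0, -1.0)  # epsilon keeps z away from zero
    s = relevance_out / z                          # relevance per unit of pre-activation
    c = tf.matmul(s, tf.transpose(w))              # send the shares back through the weights
    return a * c                                   # relevance attributed to the inputs
```

For LSTMs specifically, the usual extension (e.g. in Arras et al.'s work, the ☆ 222 repository below) applies this rule to the linear maps and handles the multiplicative gate interactions by passing relevance only through the signal, treating the gate value as a weight.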
Related projects
Alternatives and complementary repositories for Layerwise-Relevance-Propagation-for-LSTMs
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆ 222 · Updated 4 years ago
- Implementation of layer-wise relevance propagation. ☆ 7 · Updated 5 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆ 127 · Updated 3 years ago
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… ☆ 330 · Updated 2 years ago
- Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin to the domain of time series classification. ☆ 95 · Updated 9 months ago
- PyTorch implementation of "Exploring Interpretable LSTM Neural Networks over Multi-Variable Data" https://arxiv.org/pdf/1905.12034.pdf ☆ 104 · Updated 5 years ago
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP. ☆ 203 · Updated 4 months ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP). ☆ 80 · Updated last year
- This repository provides details of the experimental code in the paper: Instance-based Counterfactual Explanations for Time Series Classi… ☆ 18 · Updated 3 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019). ☆ 125 · Updated 3 years ago
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers. ☆ 97 · Updated 6 years ago
- A toolbox to iNNvestigate neural networks' predictions! ☆ 1,268 · Updated 11 months ago
- The toolkit to explain Keras model predictions. ☆ 15 · Updated 3 months ago
- Counterfactual Explanations for Multivariate Time Series Data. ☆ 29 · Updated 8 months ago
- Explaining Anomalies Detected by Autoencoders Using SHAP. ☆ 40 · Updated 3 years ago
- Code associated with the ACM-CHIL 21 paper 'T-DPSOM - An Interpretable Clustering Method for Unsupervised Learning of Patient Health States'. ☆ 66 · Updated 3 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations. ☆ 560 · Updated last week
- Implementation of the MNIST experiment for Monte Carlo Dropout from http://mlg.eng.cam.ac.uk/yarin/PDFs/NIPS_2015_bayesian_convnets.pdf ☆ 30 · Updated 4 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization. ☆ 118 · Updated 5 months ago
- Local explanations with uncertainty 💐! ☆ 39 · Updated last year
- This repository contains the implementation of Dynamask, a method to identify the features that are salient for a model to issue its pred… ☆ 75 · Updated 2 years ago
- PyTorch implementation of various neural network interpretability methods. ☆ 112 · Updated 2 years ago
- Code for "Interpolation-Prediction Networks for Irregularly Sampled Time Series", ICLR 2019.☆94Updated 3 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht…☆127Updated 3 years ago
- A unified framework of perturbation and gradient-based attribution methods for deep neural network interpretability. DeepExplain also in… ☆ 734 · Updated 4 years ago