alewarne / Layerwise-Relevance-Propagation-for-LSTMs
TensorFlow 2.1 implementation of LRP for LSTMs
☆38 · Updated 2 years ago
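The repositories on this page implement Layer-wise Relevance Propagation (LRP). As a minimal sketch of the core idea (not this repository's actual API), the epsilon-LRP rule for a single dense layer redistributes each output's relevance back to the inputs in proportion to their contribution to the pre-activation, with a small stabilizer in the denominator:

```python
import numpy as np

def lrp_epsilon(W, b, x, relevance_out, eps=1e-6):
    """Epsilon-LRP for one dense layer y = x @ W + b.

    Redistributes relevance_out (one value per output neuron) to the
    inputs: R_i = x_i * sum_j W_ij * R_j / (z_j + eps * sign(z_j)).
    Hypothetical helper for illustration only.
    """
    z = x @ W + b                                # pre-activations, shape (out,)
    s = relevance_out / (z + eps * np.sign(z))   # stabilized relevance messages
    return x * (W @ s)                           # relevance per input, shape (in,)

# Toy check: with zero bias and a small eps, relevance is (approximately)
# conserved when it flows back through the layer.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
b = np.zeros(3)
x = rng.normal(size=4)
R_out = np.abs(x @ W + b)        # stand-in for relevance arriving from above
R_in = lrp_epsilon(W, b, x, R_out)
print(np.allclose(R_in.sum(), R_out.sum(), atol=1e-4))
```

For LSTMs, the usual extension (as in the Arras et al. work several of the repos below build on) applies this rule to the linear maps and passes relevance through the multiplicative gates to the signal carrier only; that detail is beyond this sketch.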
Alternatives and similar repositories for Layerwise-Relevance-Propagation-for-LSTMs
Users interested in Layerwise-Relevance-Propagation-for-LSTMs are comparing it to the libraries listed below.
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆225 · Updated 5 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆136 · Updated 4 years ago
- Implementation of layer-wise relevance propagation. ☆8 · Updated 6 years ago
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks, supporting Matlab and Py… ☆332 · Updated 2 years ago
- Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin to the domain of time series classification ☆96 · Updated last year
- ☆28 · Updated 4 months ago
- Implementation of Layer-wise Relevance Propagation for heatmapping "deep" layers ☆98 · Updated 6 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- ☆91 · Updated 3 years ago
- ☆17 · Updated 4 years ago
- Code associated with the ACM CHIL 2021 paper "T-DPSOM: An Interpretable Clustering Method for Unsupervised Learning of Patient Health States" ☆69 · Updated 4 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP. ☆226 · Updated 10 months ago
- Local explanations with uncertainty 💐! ☆40 · Updated last year
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- PyTorch implementation of "Exploring Interpretable LSTM Neural Networks over Multi-Variable Data" https://arxiv.org/pdf/1905.12034.pdf ☆109 · Updated 5 years ago
- ☆34 · Updated 2 years ago
- Explaining Anomalies Detected by Autoencoders Using SHAP ☆41 · Updated 3 years ago
- Adversarial Attacks on Deep Neural Networks for Time Series Classification ☆77 · Updated 4 years ago
- ☆100 · Updated 7 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 9 months ago
- Quantus is an eXplainable AI toolkit for the responsible evaluation of neural network explanations ☆599 · Updated 3 months ago
- Implementation of the InterpretTime framework ☆46 · Updated 2 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆75 · Updated 7 years ago
- Code and documentation for experiments in the TreeExplainer paper ☆186 · Updated 5 years ago
- Papers and code on Explainable AI, especially for image classification ☆210 · Updated 2 years ago
- PyTorch implementation of various neural network interpretability methods ☆117 · Updated 3 years ago
- Bayesian LSTM (TensorFlow) ☆54 · Updated 2 years ago
- Repository of the ICML 2020 paper "Set Functions for Time Series" ☆126 · Updated 4 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago