albermax / innvestigate
A toolbox to iNNvestigate neural networks' predictions!
☆1,304 · Updated 6 months ago
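For orientation, here is a minimal sketch of how an attribution is typically produced with iNNvestigate itself; it assumes the library's create_analyzer()/analyze() interface, the "lrp.epsilon" analyzer name, and a pretrained tf.keras VGG16, so treat the exact names and versions as illustrative rather than definitive.

```python
# Illustrative only: analyzer names and the model_wo_softmax helper may differ
# between iNNvestigate releases (1.x targets standalone Keras, 2.x targets tf.keras).
import numpy as np
import innvestigate
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

model = VGG16(weights="imagenet")

# Attribution methods are applied to the pre-softmax outputs.
model_wo_softmax = innvestigate.model_wo_softmax(model)

# Build an LRP-epsilon analyzer and explain one (1, 224, 224, 3) input.
analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_softmax)
x = preprocess_input(np.random.rand(1, 224, 224, 3) * 255.0)
relevance = analyzer.analyze(x)  # same shape as x: per-pixel relevance scores
print(relevance.shape)
```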
Alternatives and similar repositories for innvestigate
Users interested in innvestigate are comparing it to the libraries listed below:
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆759 · Updated 5 years ago
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… ☆334 · Updated 3 years ago
- Interpretability Methods for tf.keras models with Tensorflow 2.x ☆1,035 · Updated last year
- Interesting resources related to XAI (Explainable Artificial Intelligence) ☆840 · Updated 3 years ago
- Public facing deeplift repo ☆869 · Updated 3 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆627 · Updated 3 months ago
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆640 · Updated 3 years ago
- Code for the TCAV ML interpretability project ☆645 · Updated 4 months ago
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). ☆987 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆233 · Updated 3 months ago
- Tensorflow tutorial for various Deep Neural Network visualization techniques ☆346 · Updated 5 years ago
- Interpretability and explainability of data and machine learning models ☆1,745 · Updated 8 months ago
- Layer-wise Relevance Propagation (LRP) for LSTMs. ☆225 · Updated 5 years ago
- Tensorflow 2.1 implementation of LRP for LSTMs ☆39 · Updated 2 years ago
- Neural network visualization toolkit for tf.keras ☆335 · Updated 7 months ago
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers ☆98 · Updated 7 years ago
- Generate Diverse Counterfactual Explanations for any machine learning model. ☆1,460 · Updated 3 months ago
- XAI - An eXplainability toolbox for machine learning ☆1,204 · Updated 4 years ago
- Code for "High-Precision Model-Agnostic Explanations" paper ☆809 · Updated 3 years ago
- A collection of research materials on explainable AI/ML ☆1,575 · Updated last month
- 👋 Xplique is a Neural Networks Explainability Toolbox ☆711 · Updated this week
- Layers Outputs and Gradients in Keras. Made easy. ☆1,053 · Updated 7 months ago
- Model interpretability and understanding for PyTorch (see the Integrated Gradients sketch after this list) ☆5,458 · Updated last week
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta… ☆375 · Updated 3 years ago
- Detect model's attention ☆168 · Updated 5 years ago
- Bayesian Deep Learning Benchmarks ☆672 · Updated 2 years ago
- A simple way to calibrate your neural network. ☆1,162 · Updated 3 months ago
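Several of the entries above are PyTorch attribution libraries; assuming the "Model interpretability and understanding for PyTorch" entry refers to Captum, the sketch below shows its Integrated Gradients attributor on a torchvision ResNet-18. The model choice and target class are arbitrary placeholders, not part of the listing.

```python
# Illustrative sketch of Captum's Integrated Gradients API (assumed to be the
# PyTorch interpretability library listed above).
import torch
from torchvision.models import resnet18
from captum.attr import IntegratedGradients

model = resnet18(weights=None).eval()   # untrained network, used only for the demo
x = torch.randn(1, 3, 224, 224)         # one dummy RGB image

# Integrated Gradients accumulates gradients along a straight path from a
# baseline (all zeros by default) to the input, for the chosen target class.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(x, target=0, return_convergence_delta=True)
print(attributions.shape, delta)        # per-pixel attributions and completeness gap
```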