hiranumn / IntegratedGradientsTF
TensorFlow implementation of integrated gradients presented in "Axiomatic Attribution for Deep Networks". It explains connections between two tensors.
☆16 · Updated 6 years ago
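Integrated gradients attributes a model's prediction to its input features by accumulating the gradient along a straight-line path from a baseline input to the actual input, then scaling by the input-baseline difference. A minimal NumPy sketch of the idea, using a toy quadratic model with a hand-written gradient (the function names and model here are illustrative assumptions, not code from this repository):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Riemann-sum (midpoint) approximation of integrated gradients.

    grad_f: gradient of a scalar-valued model with respect to its input.
    Returns one attribution value per input feature.
    """
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        # Gradient evaluated at a point interpolated between baseline and x.
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy model (illustrative): f(x) = sum(x**2), so grad f = 2x.
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_f, x, baseline)
```

A useful sanity check is the completeness axiom from the paper: the attributions should sum to `f(x) - f(baseline)`, which holds exactly here because the integrand is linear.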
Alternatives and similar repositories for IntegratedGradientsTF:
Users interested in IntegratedGradientsTF are comparing it to the libraries listed below.
- Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers (☆97, updated 6 years ago)
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) (☆128, updated 3 years ago)
- Layerwise Relevance Propagation with Deep Taylor Series in TensorFlow (☆71, updated 8 years ago)
- Quantitative Testing with Concept Activation Vectors in PyTorch (☆42, updated 5 years ago)
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) (☆61, updated 5 years ago)
- Towards Automatic Concept-based Explanations (☆157, updated 10 months ago)
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… (☆127, updated 3 years ago)
- ☆100, updated 6 years ago
- IBD: Interpretable Basis Decomposition for Visual Explanation (☆52, updated 6 years ago)
- This repository is all about papers and tools of Explainable AI (☆36, updated 5 years ago)
- Tools for training explainable models using attribution priors (☆122, updated 3 years ago)
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model (☆130, updated 4 years ago)
- ☆42, updated 4 years ago
- Self-Explaining Neural Networks (☆39, updated 5 years ago)
- On disentangling the menagerie of disentanglement papers (☆27, updated 5 years ago)
- The code snippets for the SW chapter of the "Interpretable AI" book (☆18, updated 5 years ago)
- Visualizing Deep Neural Network Decisions: Prediction Difference Analysis (☆117, updated 7 years ago)
- Codes for reproducing the contrastive explanation in “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… (☆54, updated 6 years ago)
- Code for AAAI 2018 accepted paper: "Beyond Sparsity: Tree Regularization of Deep Models for Interpretability" (☆78, updated 7 years ago)
- Code/figures in Right for the Right Reasons (☆55, updated 4 years ago)
- ☆133, updated 5 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" (☆30, updated 5 years ago)
- ☆51, updated 4 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" (☆30, updated 5 years ago)
- PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation (☆333, updated 3 years ago)
- TensorFlow implementation for SmoothGrad, Grad-CAM, Guided backprop, Integrated Gradients and other saliency techniques (☆31, updated 4 years ago)
- A lightweight implementation of removal-based explanations for ML models (☆58, updated 3 years ago)
- Supervised Local Modeling for Interpretability (☆28, updated 6 years ago)
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks (☆30, updated 7 years ago)
- GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model’s Prediction. Thai Le, Suhang Wang, Dongwon … (☆21, updated 4 years ago)