hiranumn / IntegratedGradientsTF
TensorFlow implementation of integrated gradients, presented in "Axiomatic Attribution for Deep Networks". It attributes a model's output to its inputs by explaining the connection between two tensors.
☆16 · Updated 5 years ago
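Integrated gradients attributes a prediction to input features by averaging the model's gradient along the straight-line path from a baseline to the input, then scaling by the input-baseline difference. The sketch below is a minimal NumPy illustration of the idea, not this repository's API; the quadratic toy function, its hand-written gradient, and the step count are all assumptions for the example.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Approximate integrated gradients with a midpoint Riemann sum
    along the straight-line path from `baseline` to `x`.

    grad_f: function returning the gradient of the model at a point
            (illustrative stand-in for autodiff, e.g. tf.GradientTape).
    """
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    grad_sum = np.zeros_like(x, dtype=float)
    for a in alphas:
        grad_sum += grad_f(baseline + a * (x - baseline))
    # Scale the averaged gradient by the input-baseline difference.
    return (x - baseline) * grad_sum / steps

# Toy "model": f(x) = sum(x**2), so grad f(x) = 2x (hypothetical example).
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2 * x

x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_f, x, baseline)
```

A useful sanity check is the completeness axiom from the paper: the attributions should sum to `f(x) - f(baseline)`, which holds here up to the Riemann-sum error.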
Alternatives and similar repositories for IntegratedGradientsTF:
Users interested in IntegratedGradientsTF are comparing it to the libraries listed below.
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" (ICLR 2019) · ☆128 · Updated 3 years ago
- Implementation of Layer-wise Relevance Propagation for heatmapping "deep" layers · ☆97 · Updated 6 years ago
- Keras implementation of DASP: Deep Approximate Shapley Propagation (ICML 2019) · ☆61 · Updated 5 years ago
- DeepCover: Uncover the truth behind AI · ☆32 · Updated 9 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… · ☆127 · Updated 3 years ago
- A collection of papers and tools on Explainable AI · ☆36 · Updated 5 years ago
- Papers on interpretable deep learning, for review · ☆29 · Updated 7 years ago
- ☆99 · Updated 6 years ago
- Quantitative Testing with Concept Activation Vectors in PyTorch · ☆42 · Updated 5 years ago
- ☆50 · Updated 4 years ago
- A lightweight implementation of removal-based explanations for ML models · ☆57 · Updated 3 years ago
- IBD: Interpretable Basis Decomposition for Visual Explanation · ☆52 · Updated 6 years ago
- ☆61 · Updated last year
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model · ☆130 · Updated 4 years ago
- How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods · ☆23 · Updated 4 years ago
- Code/figures for "Right for the Right Reasons" · ☆55 · Updated 4 years ago
- Tools for training explainable models using attribution priors · ☆120 · Updated 3 years ago
- Layer-wise Relevance Propagation with Deep Taylor Series in TensorFlow · ☆71 · Updated 8 years ago
- Towards Automatic Concept-based Explanations · ☆157 · Updated 8 months ago
- Repo for "Decision explanation and feature importance for invertible networks" · ☆13 · Updated 5 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation · ☆42 · Updated 4 years ago
- Introduces and experiments with ways to interpret and evaluate models on image data (PyTorch) · ☆40 · Updated 4 years ago
- ☆132 · Updated 5 years ago
- Implementation of Bayesian NNs in PyTorch (https://arxiv.org/pdf/1703.02910.pdf), with some help from https://github.com/Riashat/Deep-Ba… · ☆31 · Updated 3 years ago
- A benchmark for evaluating the quality of machine-learning local explanations generated by any explainer for text and image data · ☆30 · Updated 3 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity of Explanations" (NeurIPS 2019) for… · ☆25 · Updated 2 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" · ☆30 · Updated 5 years ago
- ☆34 · Updated 4 years ago
- Autoencoder network for imputing missing values · ☆27 · Updated 5 years ago
- Visualizing Deep Neural Network Decisions: Prediction Difference Analysis · ☆117 · Updated 7 years ago