Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).
☆992, updated Mar 20, 2024
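SmoothGrad, one of the methods named above, reduces noise in a gradient saliency map by averaging the gradients computed on many Gaussian-perturbed copies of the input. A minimal NumPy sketch of that idea, using a toy analytic gradient rather than this repository's API:

```python
import numpy as np

def grad_f(x):
    # Analytic gradient of the toy model f(x) = sum(x ** 2): df/dx = 2x.
    # A real use would return the gradient of a network's class score.
    return 2.0 * x

def smoothgrad(x, grad_fn, n_samples=50, noise_scale=0.1, seed=0):
    # Average grad_fn over n_samples Gaussian-noised copies of x.
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, noise_scale, size=x.shape)
        total += grad_fn(noisy)
    return total / n_samples

x = np.array([1.0, -2.0, 0.5])
mask = smoothgrad(x, grad_f)  # close to the clean gradient 2x, noise averaged out
```

For this linear gradient the averaging changes little; on a real network it suppresses the high-frequency fluctuations that make raw gradient maps hard to read.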
Alternatives and similar repositories for saliency
Users interested in saliency also compare it to the libraries listed below.
- SmoothGrad implementation in PyTorch (☆172, updated Apr 4, 2021)
- A collection of infrastructure and tools for research in neural network interpretability (☆4,704, updated Feb 6, 2023)
- Implementations of some popular saliency maps in Keras (☆166, updated May 11, 2019)
- Code for the TCAV ML interpretability project (☆652, updated Feb 5, 2026)
- Model interpretability and understanding for PyTorch (☆5,560, updated this week)
- A unified framework of perturbation- and gradient-based attribution methods for deep neural network interpretability. DeepExplain also in… (☆762, updated Aug 25, 2020)
- An adversarial example library for constructing attacks, building defenses, and benchmarking both (☆6,412, updated Apr 10, 2024)
- A toolbox to iNNvestigate neural networks' predictions! (☆1,307, updated Apr 11, 2025)
- SalGAN: Visual Saliency Prediction with Generative Adversarial Networks (☆380, updated Dec 7, 2022)
- PyTorch implementation of "Interpretable Explanations of Black Boxes by Meaningful Perturbation" (☆338, updated Nov 30, 2021)
- Visualizations for machine learning datasets (☆7,368, updated May 24, 2023)
- Neural network visualization toolkit for Keras (☆2,996, updated Feb 7, 2022)
- An implementation of Grad-CAM with Keras (☆666, updated Apr 8, 2019)
- Visualizing Deep Neural Network Decisions: Prediction Difference Analysis (☆122, updated Oct 31, 2017)
- [ICCV 2017] Torch code for Grad-CAM (☆1,629, updated Sep 17, 2022)
- Tutorials and implementations for "Self-normalizing networks" (☆1,589, updated Dec 9, 2025)
- Training Very Deep Neural Networks Without Skip-Connections (☆589, updated Jun 9, 2018)
- A generalized gradient-based CNN visualization technique (☆297, updated Apr 17, 2019)
- PyTorch implementation of convolutional neural network visualization techniques (☆8,200, updated Jan 1, 2025)
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks (☆349, updated Jul 22, 2020)
- Gradient-weighted Class Activation Mapping (Grad-CAM) demo (☆110, updated Aug 13, 2018)
- Improving Convolutional Networks via Attention Transfer (ICLR 2017) (☆1,465, updated Jul 11, 2018)
- Visualising predictions of deep neural networks (☆100, updated May 25, 2018)
- Interpretability methods for tf.keras models with TensorFlow 2.x (☆1,036, updated Jun 3, 2024)
- Implementation of Grad-CAM in TensorFlow (☆250, updated Aug 9, 2022)
- Attributing predictions made by the Inception network using the Integrated Gradients method (☆644, updated Feb 23, 2022)
- Network Dissection (http://netdissect.csail.mit.edu) for quantifying interpretability of deep CNNs (☆453, updated Aug 25, 2018)
- LIME: explaining the predictions of any machine learning classifier (☆12,101, updated Jul 25, 2024)
- Variational autoencoder in Theano (☆12, updated Sep 14, 2017)
- Shallow and Deep Convolutional Networks for Saliency Prediction (☆188, updated Dec 10, 2019)
- Detect a model's attention (☆170, updated Jul 2, 2020)
- DeepVis Toolbox (☆4,057, updated Jan 13, 2020)
- Reference implementation for "Explanations Can Be Manipulated and Geometry Is to Blame" (☆37, updated Jul 24, 2022)
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" (☆32, updated Sep 25, 2019)
- PyTorch implementation of the Quasi-Recurrent Neural Network, up to 16 times faster than NVIDIA's cuDNN LSTM (☆1,265, updated Feb 12, 2022)
- Towards Automatic Concept-based Explanations (☆162, updated May 1, 2024)
- Image augmentation for machine learning experiments (☆14,731, updated Jul 30, 2024)
- A Keras implementation of CapsNet from the NIPS 2017 paper "Dynamic Routing Between Capsules"; now test error = 0.34% (☆2,461, updated May 19, 2020)
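Integrated Gradients, which several of the repositories above implement, attributes a prediction by accumulating gradients along a straight path from a baseline input to the actual input and scaling by the input difference. A hedged sketch of the technique with a toy analytic gradient (not any listed repository's API):

```python
import numpy as np

def grad_f(x):
    # Analytic gradient of the toy model f(x) = sum(x ** 2).
    return 2.0 * x

def integrated_gradients(x, baseline, grad_fn, steps=100):
    # Midpoint Riemann sum of grad_fn along the straight line from
    # baseline to x, scaled elementwise by (x - baseline).
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.zeros_like(x)
    for a in alphas:
        avg_grad += grad_fn(baseline + a * (x - baseline))
    avg_grad /= steps
    return (x - baseline) * avg_grad

x = np.array([1.0, -2.0, 0.5])
baseline = np.zeros_like(x)
attributions = integrated_gradients(x, baseline, grad_f)
# Completeness axiom: the attributions sum to f(x) - f(baseline).
```

For this quadratic toy model the attributions come out to x squared elementwise, so their sum equals f(x) exactly; the completeness property is a useful sanity check for any implementation.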