XAITK / xaitk-saliency
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open-source explainable-AI framework providing interfaces and implementations of visual saliency algorithms, built for analytics and autonomy applications.
★83 · Updated 2 weeks ago
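Among the saliency approaches the toolkit covers are black-box perturbation methods such as sliding-window occlusion and RISE-style random masking. The sketch below illustrates the basic occlusion idea in plain NumPy; it is not the xaitk-saliency API, and `occlusion_saliency` and `toy_score` are illustrative names. The idea: hide one image region at a time and attribute the model's score drop to the hidden pixels.

```python
import numpy as np

def occlusion_saliency(image, score_fn, window=8, stride=8):
    """Occlusion-based saliency sketch: slide a masking window over the
    image and credit each region with the score drop its removal causes."""
    h, w = image.shape[:2]
    base = score_fn(image)                      # score on the unmodified image
    sal = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = 0.0   # hide this region
            drop = base - score_fn(occluded)             # how much the score fell
            sal[y:y + window, x:x + window] += drop
            counts[y:y + window, x:x + window] += 1
    # Average over overlapping windows; guard against uncovered pixels.
    return sal / np.maximum(counts, 1)

# Toy "model": scores an image by the mean intensity of its top-left
# quadrant, so saliency should concentrate there.
def toy_score(img):
    return float(img[:16, :16].mean())

img = np.ones((32, 32))
sal = occlusion_saliency(img, toy_score)
```

With this toy model, occluding any window inside the top-left 16×16 quadrant lowers the score, so the saliency map is nonzero only there. xaitk-saliency's own generators follow the same black-box pattern but are driven through its plugin interfaces rather than a single function like this.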
Related projects
Alternatives and complementary repositories for xaitk-saliency
- Detect model's attention · ★154 · Updated 4 years ago
- Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) · ★56 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization · ★118 · Updated 5 months ago
- Self-Supervised Learning in PyTorch · ★127 · Updated 7 months ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics · ★30 · Updated 6 months ago
- A PyTorch implementation of D-RISE · ★37 · Updated 3 years ago
- Notebooks for the PAR tutorial at CVPR 2021 · ★36 · Updated 3 years ago
- Code for the paper: "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) · ★27 · Updated 2 years ago
- PyTorch-centric library for evaluating and enhancing the robustness of AI technologies · ★51 · Updated 9 months ago
- PyTorch implementation of various neural network interpretability methods · ★111 · Updated 2 years ago
- Uncertainty-aware representation learning (URL) benchmark · ★98 · Updated 8 months ago
- Xplique is a Neural Networks Explainability Toolbox · ★641 · Updated last month
- NeurIPS 2021 | Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information · ★32 · Updated 2 years ago
- A toolkit for efficient computation of saliency maps for explainable AI attribution. This tool was developed at Lawrence Livermore Nationa… · ★44 · Updated 4 years ago
- Integrated Grad-CAM (submitted to the ICASSP 2021 conference) · ★19 · Updated 3 years ago
- Papers and code on explainable AI, especially for image classification · ★196 · Updated 2 years ago
- A PyTorch implementation of the explainable AI work "Contrastive Layerwise Relevance Propagation (CLRP)" · ★17 · Updated 2 years ago
- Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet · ★182 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations · ★232 · Updated 2 months ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems · ★73 · Updated 2 years ago
- Visualization toolkit for learned features of neural networks in PyTorch: Feature Visualizer, Saliency Map, Guided Gradients, Grad-CAM, D… · ★41 · Updated 3 years ago
- The official repository for the "Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning" paper · ★40 · Updated 2 years ago
- Re-implementation of the StylEx paper, training a GAN to explain a classifier in StyleSpace (Lang et al., 2021) · ★35 · Updated 11 months ago
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weig… · ★21 · Updated last year
- PIP-Net: Patch-based Intuitive Prototypes Network for Interpretable Image Classification (CVPR 2023) · ★59 · Updated 8 months ago
- Official PyTorch implementation of improved B-cos models · ★42 · Updated 8 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations · ★554 · Updated this week