XAITK / xaitk-saliency
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm interfaces and implementations, built for analytics and autonomy applications.
☆92 · Updated last month
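The description above covers perturbation-based (black-box) visual saliency, one of the algorithm families this kind of toolkit implements. As a rough illustration of the core idea only, not xaitk-saliency's actual API, here is a minimal occlusion-saliency sketch in plain NumPy; the function name `occlusion_saliency` and its parameters are hypothetical names for this example:

```python
import numpy as np

def occlusion_saliency(image, score_fn, window=4, stride=4, fill=0.0):
    """Occlusion-based saliency sketch (hypothetical helper, not the
    xaitk-saliency API). Slides an occluding patch over the image and
    credits each pixel with the drop in the model's score when the
    region covering it is hidden."""
    h, w = image.shape[:2]
    base = score_fn(image)              # score on the unmodified image
    sal = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = fill
            drop = base - score_fn(occluded)   # how much hiding this patch hurts
            sal[y:y + window, x:x + window] += drop
            counts[y:y + window, x:x + window] += 1
    counts[counts == 0] = 1             # avoid division by zero at uncovered pixels
    return sal / counts                 # average drop per pixel
```

`score_fn` stands in for any black-box classifier score; in practice it would wrap a trained model's confidence for a chosen class, which is what makes this family of methods model-agnostic.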
Alternatives and similar repositories for xaitk-saliency
Users interested in xaitk-saliency are comparing it to the libraries listed below.
- Detect model's attention ☆166 · Updated 4 years ago
- Visualization toolkit for learned features of neural networks in PyTorch. Feature Visualizer, Saliency Map, Guided Gradients, Grad-CAM, D… ☆42 · Updated 4 years ago
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆65 · Updated last year
- The official PyTorch implementation of Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… ☆79 · Updated 3 years ago
- Contains notebooks for the PAR tutorial at CVPR 2021 ☆36 · Updated 3 years ago
- Papers and code of Explainable AI, especially for image classification ☆212 · Updated 3 years ago
- A toolkit for efficient computation of saliency maps for explainable-AI attribution. This tool was developed at Lawrence Livermore Nationa… ☆45 · Updated 4 years ago
- The official code of Relevance-CAM ☆45 · Updated last year
- 🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet ☆185 · Updated 2 years ago
- Code for the paper "AI for radiographic COVID-19 detection selects shortcuts over signal" ☆29 · Updated 4 years ago
- MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics ☆35 · Updated last year
- This repository is all about papers and tools of Explainable AI ☆36 · Updated 5 years ago
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) ☆9 · Updated 2 years ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆57 · Updated last year
- ☆120 · Updated 3 years ago
- Automatic identification of regions in the latent space of a model that correspond to unique concepts, namely to concepts with a semantic… ☆14 · Updated last year
- A toolkit for quantitative evaluation of data attribution methods ☆48 · Updated this week
- Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals ☆30 · Updated 2 years ago
- PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures (CVPR 2022) ☆108 · Updated 2 years ago
- Towards Automatic Concept-based Explanations ☆159 · Updated last year
- Integrated Grad-CAM (submitted to the ICASSP 2021 conference) ☆19 · Updated 4 years ago
- XAI Experiments on an Annotated Dataset of Wild Bee Images ☆19 · Updated 6 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 10 months ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Official repository for the AAAI-21 paper "Explainable Models with Consistent Interpretations" ☆18 · Updated 3 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP) ☆137 · Updated 4 years ago
- Meaningfully debugging model mistakes with conceptual counterfactual explanations (ICML 2022) ☆75 · Updated 2 years ago
- Visualizer for PyTorch image models ☆44 · Updated 4 years ago
- ☆16 · Updated last year
- Uncertainty-aware representation learning (URL) benchmark ☆105 · Updated 3 months ago