XAITK / xaitk-saliency
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm interfaces and implementations, built for analytics and autonomy applications.
☆89 · Updated last month
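Many of the saliency algorithms a toolkit like this covers are perturbation-based ("black-box"): the input image is repeatedly perturbed, the model is re-scored, and the score changes are aggregated into a per-pixel heat map. The sketch below illustrates that idea with a simple occlusion loop in plain NumPy; it does not use xaitk-saliency's own API, and `occlusion_saliency`, `score_fn`, and the window/stride defaults are illustrative names and values only.

```python
# Minimal occlusion-style saliency sketch (NOT the xaitk-saliency API).
# `score_fn` is a hypothetical stand-in for any black-box classifier that
# returns a confidence for the class of interest.
import numpy as np

def occlusion_saliency(image: np.ndarray, score_fn, window: int = 16, stride: int = 8) -> np.ndarray:
    """Slide an occluding patch over `image` (H, W, C) and record how much
    the classifier's confidence drops; larger drops mean higher saliency."""
    h, w = image.shape[:2]
    base_score = score_fn(image)
    saliency = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    fill = image.mean(axis=(0, 1))          # occlude with the mean color
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = fill
            drop = base_score - score_fn(occluded)
            saliency[y:y + window, x:x + window] += drop
            counts[y:y + window, x:x + window] += 1
    counts[counts == 0] = 1                 # edge pixels never occluded stay at 0
    return saliency / counts                # average confidence drop per pixel

# Toy usage: a scorer that "prefers" bright pixels near the image center.
def toy_score(img: np.ndarray) -> float:
    ch, cw = img.shape[0] // 2, img.shape[1] // 2
    return float(img[ch - 8:ch + 8, cw - 8:cw + 8].mean())

img = np.random.default_rng(0).random((64, 64, 3))
sal = occlusion_saliency(img, toy_score)
print(sal.shape)  # (64, 64), highest values near the center
```

xaitk-saliency packages this kind of workflow behind reusable algorithm interfaces and implementations so that perturbation strategies and models can be swapped independently; consult the project documentation for the concrete classes it exposes.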
Alternatives and similar repositories for xaitk-saliency:
Users interested in xaitk-saliency are comparing it to the libraries listed below.
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) · ☆10 · Updated last year
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics · ☆33 · Updated 10 months ago
- Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) · ☆62 · Updated last year
- Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet · ☆183 · Updated 2 years ago
- Contains notebooks for the PAR tutorial at CVPR 2021 · ☆36 · Updated 3 years ago
- The official code of Relevance-CAM · ☆41 · Updated 11 months ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems · ☆74 · Updated 2 years ago
- ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021 · ☆94 · Updated 2 years ago
- Meaningfully debugging model mistakes with conceptual counterfactual explanations (ICML 2022) · ☆75 · Updated 2 years ago
- Papers and code on explainable AI, especially for image classification · ☆203 · Updated 2 years ago
- Automatic identification of regions in the latent space of a model that correspond to unique concepts, namely to concepts with a semantic… · ☆13 · Updated last year
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models · ☆71 · Updated 2 years ago
- Detect model's attention · ☆163 · Updated 4 years ago
- ☆25 · Updated 2 years ago
- Code for the paper "AI for radiographic COVID-19 detection selects shortcuts over signal" · ☆29 · Updated 3 years ago
- Methods for creating saliency maps for computer vision models · ☆40 · Updated 6 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms · ☆55 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization · ☆123 · Updated 8 months ago
- Visualizer for PyTorch image models · ☆44 · Updated 3 years ago
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training · ☆68 · Updated last year
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) · ☆27 · Updated 2 years ago
- A toolkit for efficient computation of saliency maps for explainable AI attribution. This tool was developed at Lawrence Livermore Nationa… · ☆45 · Updated 4 years ago
- Official repository for the AAAI-21 paper "Explainable Models with Consistent Interpretations" · ☆18 · Updated 2 years ago
- Self-Supervised Learning in PyTorch · ☆133 · Updated 11 months ago
- Official PyTorch implementation of improved B-cos models · ☆45 · Updated 11 months ago
- Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals · ☆30 · Updated 2 years ago
- Advances in Neural Information Processing Systems (NeurIPS 2021) · ☆22 · Updated 2 years ago
- The official PyTorch implementation - Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… · ☆77 · Updated 2 years ago
- PyTorch reimplementation of computing Shapley values via Truncated Monte Carlo sampling from "What is your data worth? Equitable Valuatio… · ☆25 · Updated 3 years ago
- Implementation of the CAM research paper, proposed as an alternative to the current SOTA · ☆28 · Updated 2 years ago