XAITK / xaitk-saliency
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm interfaces and implementations, built for analytics and autonomy applications.
☆91 · Updated last week
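For orientation, the sketch below illustrates the perturbation-based, black-box saliency idea (in the style of RISE) that toolkits in this space implement: occlude the input with random masks, score each masked image with the model, and accumulate each mask weighted by the resulting class score. This is a minimal NumPy sketch, not xaitk-saliency's actual API; the function name, parameters, and the `predict` callable are hypothetical.

```python
import numpy as np

def occlusion_saliency(image, predict, num_masks=500, grid=8, keep_prob=0.5, seed=0):
    """RISE-style black-box saliency sketch (hypothetical helper, not the xaitk-saliency API).

    image:   (H, W, C) float array
    predict: callable mapping an (H, W, C) array to a scalar class score
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    coverage = np.zeros((h, w))
    for _ in range(num_masks):
        # Draw a coarse random keep/occlude grid and upsample it to image size.
        coarse = (rng.random((grid, grid)) < keep_prob).astype(float)
        cell_h, cell_w = -(-h // grid), -(-w // grid)  # ceil division
        mask = np.kron(coarse, np.ones((cell_h, cell_w)))[:h, :w]
        # Score the masked image; pixels visible in high-scoring masks gain saliency.
        score = predict(image * mask[..., None])
        saliency += score * mask
        coverage += mask
    # Normalize by how often each pixel was left visible.
    return saliency / np.maximum(coverage, 1e-8)

# Example: sal = occlusion_saliency(img, lambda x: my_model_score(x))
```

xaitk-saliency itself factors roughly this pattern into separate plugin interfaces for image perturbation and saliency-map generation, so masking strategies and models can be swapped independently.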
Alternatives and similar repositories for xaitk-saliency
Users interested in xaitk-saliency are comparing it to the libraries listed below.
- Detect model's attention ☆165 · Updated 4 years ago
- Code for the paper "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆62 · Updated last year
- Visualization toolkit for learned features of neural networks in PyTorch. Feature Visualizer, Saliency Map, Guided Gradients, Grad-CAM, D… ☆41 · Updated 3 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆126 · Updated 11 months ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated last year
- A toolkit for efficient computation of saliency maps for explainable AI attribution. This tool was developed at Lawrence Livermore National Laboratory ☆45 · Updated 4 years ago
- A PyTorch implementation of D-RISE ☆37 · Updated 3 years ago
- Aligning Human & Machine Vision using explainability ☆52 · Updated last year
- Towards Automatic Concept-based Explanations ☆159 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆246 · Updated 9 months ago
- Overcomplete is a Vision-based SAE Toolbox ☆53 · Updated last month
- [CogSci'21] Study of human inductive biases in CNNs and Transformers ☆43 · Updated 3 years ago
- Papers and code on Explainable AI, especially for image classification ☆210 · Updated 2 years ago
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weights ☆22 · Updated 2 years ago
- Meaningfully debugging model mistakes with conceptual counterfactual explanations (ICML 2022) ☆75 · Updated 2 years ago
- Official PyTorch implementation of improved B-cos models ☆47 · Updated last year
- Contains notebooks for the PAR tutorial at CVPR 2021 ☆36 · Updated 3 years ago
- PyTorch-centric library for evaluating and enhancing the robustness of AI technologies ☆55 · Updated last year
- Xplique is a Neural Networks Explainability Toolbox ☆688 · Updated 7 months ago
- Self-Supervised Learning in PyTorch ☆137 · Updated last year
- PyTorch implementation of various neural network interpretability methods ☆117 · Updated 3 years ago
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆30 · Updated 2 years ago
- Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet ☆184 · Updated 2 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆599 · Updated 3 months ago
- Official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective" ☆78 · Updated 3 years ago
- B-cos Networks: Alignment is All we Need for Interpretability ☆109 · Updated last year
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) ☆10 · Updated 2 years ago
- A collection of papers and tools on Explainable AI ☆36 · Updated 5 years ago
- ☆16 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago