XAITK / xaitk-saliency
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm interfaces and implementations, built for analytics and autonomy applications.
★ 91 · Updated last month
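Occlusion-based saliency is one of the black-box approaches that toolkits like this implement: mask out regions of the input and measure how much the model's score drops. The sketch below illustrates the idea only; the toy model and function names are hypothetical stand-ins, not xaitk-saliency's actual API.

```python
# Minimal sketch of occlusion-based (black-box) saliency.
# The "model" is a hypothetical stand-in, not xaitk-saliency's API.
import numpy as np

def toy_model_score(image: np.ndarray) -> float:
    """Hypothetical classifier confidence: the mean intensity of a
    'discriminative' region in the top-left corner."""
    return float(image[:4, :4].mean())

def occlusion_saliency(image: np.ndarray, window: int = 4, stride: int = 4) -> np.ndarray:
    """Slide an occluding window over the image; saliency at each location
    is the drop in model score when that region is masked out."""
    base = toy_model_score(image)
    sal = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0] - window + 1, stride):
        for x in range(0, image.shape[1] - window + 1, stride):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = 0.0  # mask this window
            sal[y:y + window, x:x + window] = base - toy_model_score(occluded)
    return sal

image = np.ones((8, 8))
saliency = occlusion_saliency(image)
# The region the toy model relies on (top-left) gets the highest saliency.
assert saliency[:4, :4].max() == saliency.max()
```

Real implementations differ mainly in how they generate and weight the masks (e.g. random masks in RISE versus the regular sliding window above), but the score-drop principle is the same.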
Alternatives and similar repositories for xaitk-saliency

Users interested in xaitk-saliency are comparing it to the libraries listed below:
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) · ★ 64 · updated last year
- Papers and tools for Explainable AI · ★ 36 · updated 5 years ago
- PyTorch-centric library for evaluating and enhancing the robustness of AI technologies · ★ 57 · updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems · ★ 74 · updated 3 years ago
- Detect a model's attention · ★ 165 · updated 4 years ago
- A toolkit for efficient computation of saliency maps for explainable AI attribution, developed at Lawrence Livermore Nationa… · ★ 45 · updated 4 years ago
- MetaQuantus, an XAI performance tool for identifying reliable evaluation metrics · ★ 34 · updated last year
- Notebooks for the PAR tutorial at CVPR 2021 · ★ 36 · updated 3 years ago
- Official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t…" · ★ 78 · updated 3 years ago
- Code and data for the CLEVR-XAI dataset · ★ 31 · updated last year
- Official repository for the paper "Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning" · ★ 40 · updated 3 years ago
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) · ★ 30 · updated 2 years ago
- Code and other materials for the NeurIPS 2022 tutorial "Foundational Robustness of Foundation Models" · ★ 70 · updated 2 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization · ★ 128 · updated 11 months ago
- ★ 120 · updated 3 years ago
- Visualization toolkit for learned features of neural networks in PyTorch: Feature Visualizer, Saliency Map, Guided Gradients, Grad-CAM, D… · ★ 41 · updated 4 years ago
- PyTorch reimplementation of computing Shapley values via Truncated Monte Carlo sampling, from "What is your data worth? Equitable Valuatio…" · ★ 27 · updated 3 years ago
- Papers and code on Explainable AI, especially for image classification · ★ 212 · updated 2 years ago
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) · ★ 10 · updated 2 years ago
- Repository for the NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and the NeurIPS 2023 paper… · ★ 62 · updated last week
- Integrated Grad-CAM (submitted to the ICASSP 2021 conference) · ★ 19 · updated 4 years ago
- Corrected test sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet · ★ 185 · updated 2 years ago
- Self-Supervised Learning in PyTorch · ★ 138 · updated last year
- Resources on Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… · ★ 72 · updated 2 years ago
- Active and sample-efficient model evaluation · ★ 24 · updated 2 weeks ago
- Reliability diagrams that visualize whether a classifier model needs calibration · ★ 150 · updated 3 years ago
- XAI-Bench, a library for benchmarking feature-attribution explainability techniques · ★ 66 · updated 2 years ago
- Towards Automatic Concept-based Explanations · ★ 159 · updated last year
- Official repository for the AAAI-21 paper "Explainable Models with Consistent Interpretations" · ★ 18 · updated 3 years ago
- ★ 13 · updated 2 years ago