XAITK / xaitk-saliency
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm interfaces and implementations, built for analytics and autonomy applications.
★ 85 · Updated this week
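For orientation, below is a minimal sketch of the occlusion-style, black-box saliency idea that xaitk-saliency and several of the repositories listed further down implement. This is not the xaitk-saliency API itself; `predict` is a hypothetical stand-in for any classifier that maps an image to class probabilities.

```python
import numpy as np

def occlusion_saliency(image, predict, window=16, stride=8, target_class=0):
    """Estimate a saliency map by sliding an occluding patch over the image
    and measuring how much the model's confidence drops for each region."""
    h, w = image.shape[:2]
    base_score = predict(image)[target_class]
    saliency = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = 0  # black out one patch
            drop = base_score - predict(occluded)[target_class]
            saliency[y:y + window, x:x + window] += drop
            counts[y:y + window, x:x + window] += 1
    # Average contributions where windows overlap.
    return saliency / np.maximum(counts, 1)
```

With a real model, `predict` would wrap a forward pass plus softmax; xaitk-saliency packages this kind of black-box perturbation pattern (and related ones such as RISE) behind common interfaces so that models and perturbation schemes can be swapped independently.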
Related projects
Alternatives and complementary repositories for xaitk-saliency
- Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) · ★ 57 · Updated last year
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) · ★ 9 · Updated last year
- Visualization toolkit for learned features of neural networks in PyTorch. Feature Visualizer, Saliency Map, Guided Gradients, Grad-CAM, D… · ★ 41 · Updated 3 years ago
- A toolkit for efficient computation of saliency maps for explainable AI attribution. This tool was developed at Lawrence Livermore Nationa… · ★ 44 · Updated 4 years ago
- Detect a model's attention · ★ 156 · Updated 4 years ago
- ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021 · ★ 90 · Updated 2 years ago
- Contains notebooks for the PAR tutorial at CVPR 2021. · ★ 36 · Updated 3 years ago
- Meaningfully debugging model mistakes with conceptual counterfactual explanations (ICML 2022) · ★ 75 · Updated 2 years ago
- PIP-Net: Patch-based Intuitive Prototypes Network for Interpretable Image Classification (CVPR 2023) · ★ 60 · Updated 9 months ago
- The official PyTorch implementation - Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… · ★ 76 · Updated 2 years ago
- Self-Supervised Learning in PyTorch · ★ 130 · Updated 8 months ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" · ★ 51 · Updated 2 years ago
- The official repository for the "Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning" paper · ★ 40 · Updated 2 years ago
- ★ 117 · Updated 2 years ago
- Uncertainty-aware representation learning (URL) benchmark · ★ 98 · Updated 8 months ago
- Trying to obtain uncertainties from training accuracies using timm · ★ 9 · Updated 2 years ago
- Code for the paper: "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) · ★ 27 · Updated 2 years ago
- ★ 13 · Updated last year
- Official repository for the AAAI-21 paper 'Explainable Models with Consistent Interpretations' · ★ 18 · Updated 2 years ago
- 'Robust Semantic Interpretability: Revisiting Concept Activation Vectors' Official Implementation · ★ 11 · Updated 4 years ago
- Repository for implementation of active learning and semi-supervised learning algorithms and applying them to medical imaging datasets · ★ 16 · Updated 3 years ago
- [MIDL 2023] Official implementation of "Making Your First Choice: To Address Cold Start Problem in Vision Active Learning" · ★ 33 · Updated last year
- A PyTorch toolkit with 8 popular deep active learning query methods implemented. · ★ 83 · Updated 3 years ago
- [NeurIPS'21] "AugMax: Adversarial Composition of Random Augmentations for Robust Training" by Haotao Wang, Chaowei Xiao, Jean Kossaifi, Z… · ★ 125 · Updated 2 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems · ★ 73 · Updated 2 years ago
- Recycling diverse models · ★ 44 · Updated last year
- TensorFlow implementation of SmoothGrad, Grad-CAM, Guided Backprop, Integrated Gradients, and other saliency techniques · ★ 31 · Updated 3 years ago
- Overlooked Factors in Concept-based Explanations: Dataset Choice, Concept Learnability, and Human Capability (CVPR 2023) · ★ 9 · Updated last year
- XAI Experiments on an Annotated Dataset of Wild Bee Images · ★ 17 · Updated 5 months ago
- ★ 34 · Updated 2 years ago