XAITK / xaitk-saliency
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm interfaces and implementations, built for analytics and autonomy applications.
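Many of the saliency methods this kind of toolkit targets share a black-box, perturbation-based pattern: mask parts of the input image, re-score each masked copy with the model, and turn the score drops into a saliency map. The sketch below is a minimal, library-agnostic illustration of that occlusion idea, not the xaitk-saliency API; the `occlusion_saliency` function and the toy scoring model are hypothetical names introduced here for illustration.

```python
# Minimal occlusion-style saliency sketch (illustrative only; not the
# xaitk-saliency API). A black-box scoring function is probed with masked
# copies of the image, and the score drop per region becomes that region's
# saliency value.
import numpy as np

def occlusion_saliency(image, score_fn, window=8, stride=8):
    """Return a coarse saliency map for `image` under `score_fn`.

    image:    (H, W, C) float array.
    score_fn: callable mapping an image to a scalar class score
              (stands in for any black-box model).
    """
    h, w = image.shape[:2]
    base = score_fn(image)
    sal = np.zeros((h, w), dtype=float)
    counts = np.zeros((h, w), dtype=float)
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = 0.0  # mask one window
            drop = base - score_fn(occluded)            # score drop = importance
            sal[y:y + window, x:x + window] += drop
            counts[y:y + window, x:x + window] += 1
    return sal / np.maximum(counts, 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32, 3))
    # Toy "model": mean intensity of the centre patch, so the centre
    # should dominate the resulting saliency map.
    score = lambda im: float(im[12:20, 12:20].mean())
    print(occlusion_saliency(img, score).round(2))
```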
☆91 · Updated 3 weeks ago
Alternatives and similar repositories for xaitk-saliency:
Users interested in xaitk-saliency are comparing it to the libraries listed below.
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) · ☆62 · Updated last year
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) · ☆10 · Updated 2 years ago
- PyTorch-centric library for evaluating and enhancing the robustness of AI technologies · ☆54 · Updated last year
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics · ☆34 · Updated 11 months ago
- Uncertainty-aware representation learning (URL) benchmark · ☆102 · Updated 3 weeks ago
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) · ☆28 · Updated 2 years ago
- Visualization toolkit for learned features of neural networks in PyTorch. Feature Visualizer, Saliency Map, Guided Gradients, Grad-CAM, D… · ☆42 · Updated 3 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques · ☆63 · Updated 2 years ago
- Detect model's attention · ☆165 · Updated 4 years ago
- Overcomplete is a Vision-based SAE Toolbox · ☆48 · Updated 2 weeks ago
- Contains notebooks for the PAR tutorial at CVPR 2021 · ☆36 · Updated 3 years ago
- Official repo for "Efficient task-specific data valuation for nearest neighbor algorithms" · ☆26 · Updated 5 years ago
- Methods for creating saliency maps for computer vision models · ☆42 · Updated 8 months ago
- Automatic identification of regions in the latent space of a model that correspond to unique concepts, namely to concepts with a semantic… · ☆13 · Updated last year
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weig… · ☆22 · Updated last year
- The official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t… · ☆78 · Updated 2 years ago
- Alpha version of our data-centric visual benchmark for training data selection · ☆16 · Updated last year
- A benchmark of data-centric tasks from across the machine learning lifecycle · ☆72 · Updated 2 years ago
- Code and other relevant files for the NeurIPS 2022 tutorial "Foundational Robustness of Foundation Models" · ☆71 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations · ☆243 · Updated 7 months ago
- Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet · ☆182 · Updated 2 years ago
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training · ☆68 · Updated last year
- ☆120 · Updated 3 years ago
- Adversarial attacks on explanations and how to defend them · ☆313 · Updated 4 months ago
- Data Augmentation with Variational Autoencoders (TPAMI) · ☆140 · Updated 2 years ago
- Meaningfully debugging model mistakes with conceptual counterfactual explanations (ICML 2022) · ☆75 · Updated 2 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations · ☆595 · Updated 2 months ago
- A toolkit for efficient computation of saliency maps for explainable AI attribution. This tool was developed at Lawrence Livermore Nationa… · ☆45 · Updated 4 years ago
- ☆137 · Updated last year
- Download paper PDFs and other info from main AI conferences · ☆21 · Updated 2 weeks ago