SinaMohseni / Awesome-XAI-Evaluation
Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
☆77 · Mar 26, 2022 · Updated 3 years ago
Alternatives and similar repositories for Awesome-XAI-Evaluation
Users interested in Awesome-XAI-Evaluation are comparing it to the libraries listed below
- A benchmark to evaluate the quality of machine learning local explanations generated by any explainer for text and image data ☆30 · May 24, 2021 · Updated 4 years ago
- HIVE: Evaluating the Human Interpretability of Visual Explanations (ECCV 2022) ☆22 · Jan 19, 2023 · Updated 3 years ago
- A Unified Approach to Evaluate and Compare Explainable AI methods ☆14 · Jan 19, 2024 · Updated 2 years ago
- Code/figures in Right for the Right Reasons ☆57 · Dec 29, 2020 · Updated 5 years ago
- LLM benchmarks ☆13 · Feb 22, 2024 · Updated last year
- Explanation Optimization ☆13 · Oct 16, 2020 · Updated 5 years ago
- Code and Data for GlitchBench ☆13 · Feb 27, 2024 · Updated last year
- This repository provides a summary of recent empirical/human studies that measure human understanding with machine explanat… ☆14 · Jul 24, 2024 · Updated last year
- Official repository for the AAAI-21 paper 'Explainable Models with Consistent Interpretations' ☆18 · Apr 5, 2022 · Updated 3 years ago
- A Toolbox for the Evaluation of machine learning Explanations ☆16 · Jan 7, 2024 · Updated 2 years ago
- Code for the paper: Towards Better Understanding Attribution Methods. CVPR 2022. ☆17 · Jun 13, 2022 · Updated 3 years ago
- Guidelines for the responsible use of explainable AI and machine learning. ☆17 · Jan 30, 2023 · Updated 3 years ago
- Invertible Concept-based Explanation (ICE) ☆19 · Oct 29, 2025 · Updated 3 months ago
- Introduces and experiments with ways to interpret and evaluate models for image data. (PyTorch) ☆40 · Mar 4, 2020 · Updated 5 years ago
- Repository for the paper "Benchmarking and Survey of Explanation Methods for Black Box Models" ☆18 · Jun 28, 2022 · Updated 3 years ago
- Research model for classification and feature extraction of dermatoscopic images ☆23 · May 22, 2023 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Aug 17, 2024 · Updated last year
- A collection of research materials on explainable AI/ML ☆1,612 · Dec 11, 2025 · Updated 2 months ago
- Concealed Data Poisoning Attacks on NLP Models ☆21 · Sep 4, 2023 · Updated 2 years ago
- ☆51 · Aug 29, 2020 · Updated 5 years ago
- ☆916 · Mar 19, 2023 · Updated 2 years ago
- Detect model's attention ☆170 · Jul 2, 2020 · Updated 5 years ago
- An efficient toolbox to quickly evaluate SOC/SOD/COD benchmarks ☆30 · Sep 15, 2022 · Updated 3 years ago
- ☆35 · Jun 22, 2021 · Updated 4 years ago
- Distributional Shapley: A Distributional Framework for Data Valuation ☆30 · May 1, 2024 · Updated last year
- XAI Stories. Case studies for eXplainable Artificial Intelligence ☆31 · Oct 14, 2020 · Updated 5 years ago
- ☆31 · Feb 25, 2022 · Updated 3 years ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆33 · Nov 12, 2025 · Updated 3 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Mar 22, 2021 · Updated 4 years ago
- Code for "Generative causal explanations of black-box classifiers" ☆35 · Jan 15, 2021 · Updated 5 years ago
- Implementation of the paper "Shapley Explanation Networks" ☆88 · Jan 16, 2021 · Updated 5 years ago
- A math-wiki for university students, competition students, and high school students ☆10 · May 26, 2022 · Updated 3 years ago
- [CVPR2024] Learning from Synthetic Human Group Activities ☆14 · Feb 24, 2025 · Updated 11 months ago
- DOMAINEVAL is an auto-constructed benchmark for multi-domain code generation that consists of 2k+ subjects (i.e., description, reference … ☆14 · Dec 12, 2024 · Updated last year
- Code for the paper "Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection" ☆10 · Aug 7, 2023 · Updated 2 years ago
- ☆12 · Jan 11, 2026 · Updated last month
- Data and code for the paper COVID-Fact: Fact Extraction and Verification of Real-World Claims on COVID-19 Pandemic. ☆39 · Feb 11, 2025 · Updated last year
- Detect wildfires using ML on images from cameras on vantage points ☆11 · Oct 16, 2024 · Updated last year
- Pytorch Implementation of the Explainable Conditional Adversarial Autoencoder using Saliency Maps and SHAP (J. of Imaging - MDPI) ☆12 · Mar 5, 2025 · Updated 11 months ago