Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
☆77 · Updated Mar 26, 2022
Alternatives and similar repositories for Awesome-XAI-Evaluation
Users interested in Awesome-XAI-Evaluation are also comparing it to the repositories listed below.
- A benchmark to evaluate the quality of local machine learning explanations generated by any explainer, for text and image data ☆30 · Updated May 24, 2021
- HIVE: Evaluating the Human Interpretability of Visual Explanations (ECCV 2022) ☆22 · Updated Jan 19, 2023
- A Unified Approach to Evaluate and Compare Explainable AI methods ☆14 · Updated Jan 19, 2024
- The official repository containing the source code for the explAIner publication ☆32 · Updated Apr 29, 2024
- Counterfactual SHAP: a framework for counterfactual feature importance ☆21 · Updated Jul 6, 2023
- A summary of recent empirical/human studies that measure human understanding with machine explanat… ☆14 · Updated Jul 24, 2024
- PyTorch code for the WWW '19 paper "On Attribution of Recurrent Neural Network Predictions via Additive Decomposition" ☆11 · Updated Mar 18, 2021
- Explanation Optimization ☆13 · Updated Oct 16, 2020
- Interesting resources related to XAI (Explainable Artificial Intelligence) ☆851 · Updated May 31, 2022
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated Aug 17, 2024
- Guidelines for the responsible use of explainable AI and machine learning ☆17 · Updated Jan 30, 2023
- Concealed Data Poisoning Attacks on NLP Models ☆21 · Updated Sep 4, 2023
- Official repository for the AAAI-21 paper "Explainable Models with Consistent Interpretations" ☆18 · Updated Apr 5, 2022
- A library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI ☆55 · Updated Aug 17, 2022
- Multi-Objective Counterfactuals ☆43 · Updated Jul 8, 2022
- ☆51 · Updated Aug 29, 2020
- Code for the CVPR 2022 paper "Towards Better Understanding Attribution Methods" ☆17 · Updated Jun 13, 2022
- XAI Stories: case studies for eXplainable Artificial Intelligence ☆31 · Updated Oct 14, 2020
- A collection of research materials on explainable AI/ML ☆1,623 · Updated Mar 7, 2026
- Official source code for "Time is Not Enough: Time-Frequency based Explanation for Time-Series Black-Box Models" ☆12 · Updated Dec 5, 2024
- ☆50 · Updated Mar 24, 2023
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated Dec 8, 2022
- A PyTorch implementation of learning shapelets, from Grabocka et al., "Learning Time-Series Shapelets" ☆68 · Updated Mar 3, 2022
- Code to explain one-dimensional convolutional neural networks (1D-CNNs) using Layer-wise Relevance Propagation ☆13 · Updated Aug 24, 2021
- Code and data for GlitchBench ☆13 · Updated Feb 27, 2024
- ☆915 · Updated Mar 19, 2023
- Detect a model's attention ☆172 · Updated Jul 2, 2020
- Introduces and experiments with ways to interpret and evaluate models in the image domain (PyTorch) ☆40 · Updated Mar 4, 2020
- Model Agnostic Explanations ☆18 · Updated Aug 12, 2019
- ☆10 · Updated Dec 8, 2023
- ☆42 · Updated Feb 2, 2024
- Quantus: an eXplainable AI toolkit for the responsible evaluation of neural network explanations ☆647 · Updated Mar 9, 2026
- An open-source library for the interpretability of time series classifiers ☆144 · Updated Nov 19, 2025
- A rule-based approach to explain the output of any machine learning model ☆15 · Updated Apr 4, 2024
- Code for a Datasets and Benchmarks submission, NeurIPS 2022 ☆13 · Updated Aug 16, 2022
- Supervised Local Modeling for Interpretability ☆29 · Updated Oct 27, 2018
- Codebase, data, and models for the Headline Grouping paper at NAACL 2021 ☆12 · Updated Oct 2, 2022
- Explanation by Progressive Exaggeration ☆20 · Updated Nov 21, 2022
- Codebase used in the paper "Foundational Models for Continual Learning: An Empirical Study of Latent Replay" ☆30 · Updated Jan 24, 2023
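Several of the evaluation toolkits above (e.g. OpenXAI, Quantus) include perturbation-based faithfulness tests: delete the features an explanation ranks as most important and check that the model's output changes more than it does for a random deletion. A minimal pure-Python sketch of that idea, with a hypothetical toy linear model and names chosen for illustration only:

```python
import random

# Toy "model": a weighted sum of features, standing in for any black-box predictor.
WEIGHTS = [0.5, 2.0, 0.1, 3.0, 0.0, 1.5]

def model(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def faithfulness_drop(x, attribution, k):
    """Deletion test: zero out the k features with the largest |attribution|
    and report how much the model output changes. A faithful explanation
    should cause at least as large a change as deleting k random features."""
    ranked = sorted(range(len(x)), key=lambda i: abs(attribution[i]), reverse=True)
    x_del = list(x)
    for i in ranked[:k]:
        x_del[i] = 0.0
    return abs(model(x) - model(x_del))

x = [1.0] * 6
# For a linear model, the per-feature contributions w_i * x_i are an exact attribution.
exact_attr = [w * v for w, v in zip(WEIGHTS, x)]

random.seed(0)
top_drop = faithfulness_drop(x, exact_attr, k=2)          # deletes features 3 and 1
rand_drop = faithfulness_drop(x, [random.random() for _ in x], k=2)
print(top_drop, rand_drop)
```

With all-ones input, the exact attribution deletes the two largest weights (3.0 and 2.0), so `top_drop` is 5.0, the maximum achievable for k=2; any random attribution can only tie or do worse. Real toolkits average this comparison over many inputs and deletion budgets rather than a single instance.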