Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
☆77 · Updated Mar 26, 2022
Alternatives and similar repositories for Awesome-XAI-Evaluation
Users interested in Awesome-XAI-Evaluation are comparing it to the repositories listed below.
- HIVE: Evaluating the Human Interpretability of Visual Explanations (ECCV 2022) ☆22 · Updated Jan 19, 2023
- A Unified Approach to Evaluate and Compare Explainable AI methods ☆14 · Updated Jan 19, 2024
- The official repository containing the source code for the explAIner publication ☆32 · Updated Apr 29, 2024
- Counterfactual SHAP: a framework for counterfactual feature importance ☆21 · Updated Jul 6, 2023
- Code and figures for Right for the Right Reasons ☆57 · Updated Dec 29, 2020
- A summary of recent empirical/human studies that measure human understanding with machine explanat… ☆14 · Updated Jul 24, 2024
- PyTorch code for the WWW '19 paper "On Attribution of Recurrent Neural Network Predictions via Additive Decomposition" ☆11 · Updated Mar 18, 2021
- Explanation Optimization ☆13 · Updated Oct 16, 2020
- Interesting resources related to XAI (Explainable Artificial Intelligence) ☆851 · Updated May 31, 2022
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated Aug 17, 2024
- Source code for the paper "Joint Shapley values: a measure of joint feature importance" ☆12 · Updated Sep 14, 2021
- ☆52 · Updated Aug 29, 2020
- Code for the CVPR 2022 paper "Towards Better Understanding Attribution Methods" ☆17 · Updated Jun 13, 2022
- A collection of research materials on explainable AI/ML ☆1,627 · Updated Mar 7, 2026
- Invertible Concept-based Explanation (ICE) ☆19 · Updated Oct 29, 2025
- Official source code for "Time is Not Enough: Time-Frequency based Explanation for Time-Series Black-Box Models" ☆12 · Updated Dec 5, 2024
- Experiment with maintaining a living vocabulary ☆15 · Updated Aug 25, 2015
- A PyTorch implementation of learning shapelets, from the paper by Grabocka et al., "Learning Time-Series Shapelets" ☆69 · Updated Mar 3, 2022
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated Dec 8, 2022
- ☆104 · Updated Jul 6, 2023
- Code to explain One-Dimensional Convolutional Neural Networks (1D-CNN) using Layer-wise Relevance Propagation ☆13 · Updated Aug 24, 2021
- ☆917 · Updated Mar 19, 2023
- Automated Machine Learning (AutoML) for Kaggle Competition ☆32 · Updated Jul 6, 2023
- Detect a model's attention ☆174 · Updated Jul 2, 2020
- Introduces and experiments with ways to interpret and evaluate image models (PyTorch) ☆40 · Updated Mar 4, 2020
- FeedbackQA: Improving Question Answering Post-Deployment with Interactive Feedback ☆12 · Updated Jul 13, 2022
- ☆42 · Updated Feb 2, 2024
- ICCV 2023: Holistic Label Correction for Noisy Multi-Label Classification ☆13 · Updated Oct 29, 2023
- Quantus is an eXplainable AI toolkit for the responsible evaluation of neural network explanations ☆656 · Updated Mar 9, 2026
- Repository for the paper "Benchmarking and Survey of Explanation Methods for Black Box Models" ☆18 · Updated Jun 28, 2022
- ☆44 · Updated May 17, 2020
- An open-source library for the interpretability of time series classifiers ☆143 · Updated Nov 19, 2025
- Supervised Local Modeling for Interpretability ☆29 · Updated Oct 27, 2018
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆33 · Updated Nov 12, 2025
- Explanation by Progressive Exaggeration ☆20 · Updated Nov 21, 2022
- ☆10 · Updated Mar 29, 2021
- DL Backtrace is a new explainability technique for deep learning models that works for any modality and model type ☆25 · Updated this week
- ☆16 · Updated Nov 16, 2025
- List of relevant resources for machine learning from explanatory supervision ☆163 · Updated Jul 14, 2025