Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
☆77 · Updated Mar 26, 2022
Alternatives and similar repositories for Awesome-XAI-Evaluation
Users interested in Awesome-XAI-Evaluation are comparing it to the libraries listed below.
- A benchmark to evaluate the quality of machine learning local explanations generated by any explainer for text and image data ☆30 · Updated May 24, 2021
- The official repository containing the source code for the explAIner publication ☆32 · Updated Apr 29, 2024
- An evaluation toolbox for machine learning explanations ☆16 · Updated Jan 7, 2024
- Code/figures in Right for the Right Reasons ☆57 · Updated Dec 29, 2020
- PyTorch code for the WWW '19 paper "On Attribution of Recurrent Neural Network Predictions via Additive Decomposition" ☆11 · Updated Mar 18, 2021
- Interesting resources related to XAI (Explainable Artificial Intelligence) ☆854 · Updated May 31, 2022
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆254 · Updated Aug 17, 2024
- Source code for "Joint Shapley values: a measure of joint feature importance" ☆12 · Updated Sep 14, 2021
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆72 · Updated Jan 26, 2023
- Awesome Explainable AI (XAI) and Interpretable ML Papers and Resources ☆186 · Updated May 4, 2021
- Official repository for the AAAI-21 paper "Explainable Models with Consistent Interpretations" ☆18 · Updated Apr 5, 2022
- Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI ☆55 · Updated Aug 17, 2022
- Multi-Objective Counterfactuals ☆43 · Updated Jul 8, 2022
- A collection of research materials on explainable AI/ML ☆1,632 · Updated Mar 7, 2026
- Official source code for "Time is Not Enough: Time-Frequency based Explanation for Time-Series Black-Box Models" ☆13 · Updated Dec 5, 2024
- ☆104 · Updated Jul 6, 2023
- Code to explain one-dimensional convolutional neural networks (1D-CNNs) using layer-wise relevance propagation ☆13 · Updated Aug 24, 2021
- Code and data for GlitchBench ☆13 · Updated Feb 27, 2024
- ☆917 · Updated Mar 19, 2023
- Releasing the braille translator '점자로' (Jeomjaro) on November 4, 2019, in commemoration of the 93rd Braille Day ☆30 · Updated Feb 28, 2025
- A GitHub Action to run pyright ☆12 · Updated Dec 5, 2024
- Introduces and experiments with ways to interpret and evaluate models in the image domain (PyTorch) ☆40 · Updated Mar 4, 2020
- FeedbackQA: Improving Question Answering Post-Deployment with Interactive Feedback ☆12 · Updated Jul 13, 2022
- Model Agnostic Explanations ☆17 · Updated Aug 12, 2019
- ☆42 · Updated Feb 2, 2024
- "TSEvo: Counterfactuals for Time Series Classification", accepted at ICMLA '22 ☆14 · Updated Dec 22, 2022
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆659 · Updated Apr 22, 2026
- An open-source library for the interpretability of time series classifiers ☆143 · Updated Nov 19, 2025
- Anupam Datta, Matt Fredrikson, Klas Leino, Kaiji Lu, Shayak Sen, Zifan Wang ☆18 · Updated Feb 23, 2021
- Supervised Local Modeling for Interpretability ☆29 · Updated Oct 27, 2018
- Codebase, data, and models for the headline grouping paper at NAACL 2021 ☆12 · Updated Oct 2, 2022
- DSTC8-AVSD: sentence generation task for Audio Visual Scene-aware Dialog ☆14 · Updated Jun 10, 2021
- ☆10 · Updated Mar 29, 2021
- DL Backtrace is a new explainability technique for deep learning models that works for any modality and model type ☆25 · Updated Apr 21, 2026
- Towards Automatic Concept-based Explanations ☆164 · Updated May 1, 2024
- Helps create a custom polygon annotation tool ☆14 · Updated Jul 3, 2025
- List of relevant resources for machine learning from explanatory supervision ☆164 · Updated Jul 14, 2025
- Official code implementation for the paper "Interpreting Multivariate Shapley Interactions in DNNs" (AAAI 2021) ☆33 · Updated Mar 23, 2021
- Code for the CubeRefine R-CNN model of our CVPRW '23 paper "Parcel3D: Shape Reconstruction From Single RGB Images for Applications in Tra…" ☆17 · Updated Jul 12, 2023