OpenXAI : Towards a Transparent Evaluation of Model Explanations
☆252 · Updated Aug 17, 2024
Alternatives and similar repositories for OpenXAI
Users interested in OpenXAI are comparing it to the libraries listed below.
- Code for the paper "Are Large Language Models Post Hoc Explainers?" ☆34 · Updated Jul 22, 2024
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆72 · Updated Jan 26, 2023
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆649 · Updated Mar 9, 2026
- Code for the paper "Explaining Image Classifiers by Removing Input Features Using Generative Models" (ACCV 2020) https://arxiv.org/abs/1910.0… ☆15 · Updated Nov 22, 2022
- Explanation Optimization ☆13 · Updated Oct 16, 2020
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆77 · Updated Mar 26, 2022
- Code for the paper "Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals" ☆18 · Updated Oct 17, 2022
- GraphXAI: a resource to support the development and evaluation of GNN explainers ☆207 · Updated May 22, 2024
- Code for the CVPR 2020 oral paper "SAM: The Sensitivity of Attribution Methods to Hyperparameters" ☆27 · Updated Dec 8, 2022
- Local explanations with uncertainty 💐! ☆42 · Updated Aug 8, 2023
- A Unified Approach to Evaluate and Compare Explainable AI Methods ☆14 · Updated Jan 19, 2024
- Generate diverse counterfactual explanations for any machine learning model ☆1,501 · Updated Jul 13, 2025
- 👋 Xplique is a neural network explainability toolbox ☆738 · Updated Feb 24, 2026
- A Python package for benchmarking interpretability techniques on Transformers ☆215 · Updated Sep 29, 2024
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆35 · Updated Apr 23, 2024
- OCEAN: Optimal Counterfactual Explanations in Tree Ensembles (ICML 2021) ☆35 · Updated Feb 16, 2026
- ☆14 · Updated Dec 4, 2023
- Algorithms for explaining machine learning models ☆2,621 · Updated Oct 17, 2025
- Counterfactual Explanation Based on Gradual Construction for Deep Networks (PyTorch) ☆11 · Updated Apr 7, 2021
- The Conceptual Coverage Across Languages Benchmark for Text-to-Image Models ☆12 · Updated Oct 28, 2024
- Explaining neural decisions contrastively to alternative decisions ☆24 · Updated Mar 18, 2021
- Interpretability and explainability of data and machine learning models ☆1,771 · Updated this week
- Training and evaluating NBM and SPAM for interpretable machine learning ☆78 · Updated Mar 22, 2023
- The Recognizing, Exploring, and Articulating Limitations in Machine Learning research tool (REAL ML) is a set of guided activities to hel… ☆52 · Updated May 6, 2022
- Code for the NAACL 2022 paper "Reframing Human-AI Collaboration for Generating Free-Text Explanations" ☆30 · Updated Apr 28, 2023
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated Apr 23, 2022
- Adversarial attacks on post hoc explanation techniques (LIME/SHAP) ☆82 · Updated Dec 8, 2022
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆33 · Updated Nov 12, 2025
- Explain neural networks using Layer-Wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curv… ☆17 · Updated Aug 7, 2022
- XAI - An eXplainability toolbox for machine learning ☆1,231 · Updated Nov 29, 2025
- HIVE: Evaluating the Human Interpretability of Visual Explanations (ECCV 2022) ☆22 · Updated Jan 19, 2023
- Python library for robustness monitoring and adversarial debugging of NLP models ☆15 · Updated Dec 26, 2022
- A lightweight implementation of removal-based explanations for ML models ☆59 · Updated Jul 19, 2021
- Interpretable Explanations of Black Boxes by Meaningful Perturbation (PyTorch) ☆12 · Updated Aug 30, 2024
- Model interpretability and understanding for PyTorch ☆5,584 · Updated this week
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines ☆31 · Updated Jul 21, 2025
- eXplainable Machine Learning 2022 at MIM UW ☆20 · Updated Jul 1, 2023
- ☆13 · Updated Jul 26, 2023
- Optimizers for performing approximate Bayesian inference on neural network parameters with TensorFlow and JAX ☆13 · Updated Feb 17, 2024