epfl-ml4ed / evaluating-explainers
Comparing 5 different XAI techniques (LIME, PermSHAP, KernelSHAP, DiCE, CEM) through quantitative metrics. Published at EDM 2022.
☆17 · Updated 2 years ago
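The repository compares attribution methods such as PermSHAP and KernelSHAP, which approximate Shapley values by sampling feature coalitions. As a minimal, self-contained illustration of what those methods estimate (this is not code from the repository — the function name and toy model are my own), exact Shapley values can be computed by enumerating all coalitions for a small model:

```python
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(predict, x, background, n_features):
    """Exact Shapley values by subset enumeration. Features outside a
    coalition are replaced with background values -- the same masking
    idea that KernelSHAP/PermSHAP approximate by sampling."""
    phi = np.zeros(n_features)
    idx = list(range(n_features))
    for i in idx:
        others = [j for j in idx if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                # classic Shapley coalition weight |S|!(n-|S|-1)!/n!
                w = factorial(len(S)) * factorial(n_features - len(S) - 1) \
                    / factorial(n_features)
                x_S = background.copy()
                x_S[list(S)] = x[list(S)]          # coalition S present
                x_Si = x_S.copy()
                x_Si[i] = x[i]                     # coalition S plus feature i
                phi[i] += w * (predict(x_Si) - predict(x_S))
    return phi

# Toy linear model: here the Shapley value of feature i is exactly
# w_i * (x_i - background_i), which makes the result easy to check.
w = np.array([2.0, -1.0, 0.5])
predict = lambda z: float(w @ z)
x = np.array([1.0, 2.0, 3.0])
bg = np.zeros(3)
print(shapley_values(predict, x, bg, 3))  # approximately [2., -2., 1.5]
```

Exact enumeration costs O(2^n) model calls, which is why the listed approximation libraries exist in the first place.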
Alternatives and similar repositories for evaluating-explainers:
Users interested in evaluating-explainers are comparing it to the libraries listed below.
- Rule Extraction Methods for Interactive eXplainability ☆43 · Updated 2 years ago
- For calculating Shapley values via linear regression. ☆67 · Updated 3 years ago
- Benchmark time series data sets for PyTorch ☆35 · Updated last year
- A collection of counterfactual explanation algorithms. ☆50 · Updated 4 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆245 · Updated 8 months ago
- ☆18 · Updated 3 years ago
- Overview of different model interpretability libraries. ☆48 · Updated 2 years ago
- Conformal Histogram Regression: efficient conformity scores for non-parametric regression problems ☆22 · Updated 3 years ago
- Multiple Generalized Additive Models implemented in Python (EBM, XGB, Spline, FLAM). Code for our KDD 2021 paper "How Interpretable and T… ☆12 · Updated 3 years ago
- Multi-Objective Counterfactuals ☆41 · Updated 2 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆44 · Updated last week
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated last year
- An open-source library for the interpretability of time series classifiers ☆133 · Updated 5 months ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… ☆66 · Updated 2 years ago
- ☆31 · Updated 3 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆66 · Updated 9 months ago
- Dynamic causal Bayesian optimisation ☆36 · Updated 2 years ago
- Extending Conformal Prediction to LLMs ☆66 · Updated 10 months ago
- ☆16 · Updated 2 years ago
- Code associated with the Interpretable AI book (https://www.manning.com/books/interpretable-ai) ☆60 · Updated 3 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Official code for the paper "Composite Feature Selection using Deep Ensembles" ☆22 · Updated 2 years ago
- Code for the paper "Are Large Language Models Post Hoc Explainers?" ☆31 · Updated 9 months ago
- TimeSHAP explains Recurrent Neural Network predictions. ☆173 · Updated last year
- Code for "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning" ☆45 · Updated 2 years ago
- Neural Additive Models (Google Research) ☆69 · Updated 3 years ago
- Influence Estimation for Gradient-Boosted Decision Trees ☆27 · Updated 10 months ago
- ☆33 · Updated 10 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆64 · Updated 2 years ago
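Several of the listed repositories (DiCE, CEM, CEML, Multi-Objective Counterfactuals) concern counterfactual explanations: finding a small change to an input that flips the model's decision. A minimal sketch of that idea, using a greedy coordinate search on a toy linear scorer (none of this code comes from the listed projects; real libraries use far more careful optimisation and constraints):

```python
import numpy as np

def find_counterfactual(score, x, step=0.05, max_iter=500, margin=1e-9):
    """Greedy coordinate search for a counterfactual example: repeatedly
    take the single-feature step that pushes the decision score hardest
    toward the opposite class, until the score's sign flips. A crude
    stand-in for the optimisation DiCE/CEM-style methods perform."""
    cf = x.astype(float).copy()
    s0 = np.sign(score(x))
    for _ in range(max_iter):
        if s0 * score(cf) < -margin:       # decision has flipped
            return cf
        candidates = []
        for i in range(len(cf)):
            for d in (-step, step):
                cand = cf.copy()
                cand[i] += d
                candidates.append((s0 * score(cand), cand))
        # keep the move that most reduces the original-class score
        cf = min(candidates, key=lambda c: c[0])[1]
    return None

# Toy linear classifier: score > 0 means class 1, score < 0 means class 0.
w, b = np.array([1.0, 2.0]), -1.0
score = lambda z: float(w @ z + b)
x = np.array([0.2, 0.2])                   # score(x) = -0.4 -> class 0
cf = find_counterfactual(score, x)
print(cf, score(cf))                       # a nearby input classified as 1
```

The search changes one feature per step, so the resulting counterfactual stays sparse and close to the original input — the two properties the listed counterfactual libraries optimise for explicitly.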