tml-tuebingen / nshap
Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023.
☆17 · Updated last year
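The interaction indices that nshap computes generalize the classical Shapley value. As background for comparing the libraries below, here is a minimal sketch of the exact Shapley value computed by coalition enumeration. This is not nshap's API; the player weights and the value function `v` are invented for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions.

    `value` maps a frozenset of players to a real number.
    Exponential in the number of players, so only viable for toy games.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                # Classical Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of player i to coalition S
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Hypothetical toy game: each player contributes its weight,
# and players 1 and 2 earn a bonus of 1.0 when they cooperate.
w = {1: 1.0, 2: 2.0, 3: 3.0}
def v(S):
    bonus = 1.0 if {1, 2} <= S else 0.0
    return sum(w[p] for p in S) + bonus

phi = shapley_values([1, 2, 3], v)
# Efficiency: the values sum to v(grand coalition) - v(empty set) = 7.0,
# and the cooperation bonus is split evenly between players 1 and 2.
```

n-Shapley values extend this idea by attributing the bonus to the interaction term {1, 2} explicitly rather than splitting it between the individual players.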
Alternatives and similar repositories for nshap:
Users interested in nshap are comparing it to the libraries listed below.
- A lightweight implementation of removal-based explanations for ML models. ☆57 · Updated 3 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques. ☆60 · Updated last year
- Fast and incremental explanations for online machine learning models. Works best with the river framework. ☆52 · Updated 3 weeks ago
- A Natural Language Interface to Explainable Boosting Machines. ☆62 · Updated 6 months ago
- Influence Estimation for Gradient-Boosted Decision Trees. ☆26 · Updated 7 months ago
- CEML: Counterfactuals for Explaining Machine Learning models, a Python toolbox. ☆42 · Updated 5 months ago
- Mixture of Decision Trees for Interpretable Machine Learning. ☆11 · Updated 3 years ago
- Model Agnostic Counterfactual Explanations. ☆87 · Updated 2 years ago
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral). ☆40 · Updated 2 years ago
- Testing Language Models for Memorization of Tabular Datasets. ☆32 · Updated this week
- OpenXAI: Towards a Transparent Evaluation of Model Explanations. ☆237 · Updated 5 months ago
- For calculating Shapley values via linear regression. ☆66 · Updated 3 years ago
- A fairness library in PyTorch. ☆26 · Updated 5 months ago
- Code for "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning". ☆43 · Updated 2 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆75 · Updated last year
- Achieve error-rate fairness between societal groups for any score-based classifier. ☆15 · Updated 8 months ago
- Multi-Objective Counterfactuals. ☆41 · Updated 2 years ago
- A practical Active Learning Python package with a strong focus on experiments. ☆51 · Updated 2 years ago
- A collection of counterfactual explanation algorithms. ☆50 · Updated 3 years ago
- MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics. ☆32 · Updated 9 months ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… ☆60 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics. ☆75 · Updated last year
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆38 · Updated 8 months ago
- Extending Conformal Prediction to LLMs. ☆60 · Updated 6 months ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP). ☆81 · Updated 2 years ago
- ☆72 · Updated 3 months ago
- Editing machine learning models to reflect human knowledge and values. ☆123 · Updated last year
- A Python package providing two algorithms, DAME and FLAME, for fast and interpretable treatment-control matching of categorical data. ☆56 · Updated 7 months ago
- [Experimental] Global causal discovery algorithms. ☆95 · Updated last week