MarcelRobeer / explabox
Explore/examine/explain/expose your model with the explabox!
☆16 · Updated last month
Alternatives and similar repositories for explabox
Users interested in explabox are comparing it to the libraries listed below.
- Explainable Artificial Intelligence through Contextual Importance and Utility ☆28 · Updated 10 months ago
- A Natural Language Interface to Explainable Boosting Machines ☆67 · Updated 11 months ago
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023. ☆17 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆68 · Updated 2 years ago
- Missing data amputation and exploration functions for Python ☆71 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆44 · Updated 3 weeks ago
- Testing Language Models for Memorization of Tabular Datasets. ☆33 · Updated 4 months ago
- (no description) ☆37 · Updated 4 years ago
- Fairness toolkit for pytorch, scikit learn and autogluon ☆32 · Updated 6 months ago
- (no description) ☆50 · Updated 2 years ago
- (no description) ☆44 · Updated 3 weeks ago
- Repository for the explanation method Calibrated Explanations (CE) ☆67 · Updated last month
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- For calculating Shapley values via linear regression. ☆68 · Updated 4 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆42 · Updated 3 months ago
- Mixture of Decision Trees for Interpretable Machine Learning ☆11 · Updated 3 years ago
- (no description) ☆17 · Updated 10 months ago
- Train Gradient Boosting models that are both high-performance *and* Fair! ☆105 · Updated last year
- PyTorch Explain: Interpretable Deep Learning in Python. ☆156 · Updated last year
- Editing machine learning models to reflect human knowledge and values ☆126 · Updated last year
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021). ☆220 · Updated last year
- Multi-Objective Counterfactuals ☆41 · Updated 2 years ago
- CinnaMon is a Python library which offers a number of tools to detect, explain, and correct data drift in a machine learning system ☆77 · Updated 2 years ago
- Extending Conformal Prediction to LLMs ☆66 · Updated last year
- Rule Extraction Methods for Interactive eXplainability ☆43 · Updated 3 years ago
- Code for paper: Are Large Language Models Post Hoc Explainers? ☆33 · Updated 11 months ago
- A toolbox for fair and explainable machine learning ☆55 · Updated last year
- Measuring data importance over ML pipelines using the Shapley value. ☆42 · Updated last month