iancovert / removal-explanations
A lightweight implementation of removal-based explanations for ML models.
☆59 · Updated 3 years ago
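For context, removal-based explanations attribute a model's prediction to individual features by measuring how the output changes when features are withheld. The snippet below is a minimal, library-agnostic sketch of that idea only; it does not use or mirror the removal-explanations API, and the mean-imputation "removal" strategy and the Ridge/diabetes setup are illustrative assumptions.

```python
# Sketch: score each feature by how much the prediction changes when that
# feature is "removed" (here, replaced by its training mean). Illustrative
# only -- not the API of the removal-explanations package.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)
model = Ridge().fit(X, y)

baseline = X.mean(axis=0)            # stand-in values for removed features
x = X[0]                             # instance to explain
full_pred = model.predict(x.reshape(1, -1))[0]

importances = np.zeros(X.shape[1])
for j in range(X.shape[1]):
    x_removed = x.copy()
    x_removed[j] = baseline[j]       # remove feature j by imputing its mean
    importances[j] = full_pred - model.predict(x_removed.reshape(1, -1))[0]

# Features whose removal changes the prediction most are ranked as most important.
ranking = np.argsort(-np.abs(importances))
```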
Alternatives and similar repositories for removal-explanations:
Users interested in removal-explanations are comparing it to the repositories listed below.
- Repository for code release of paper "Robust Variational Autoencoders for Outlier Detection and Repair of Mixed-Type Data" (AISTATS 2020) ☆50 · Updated 5 years ago
- For calculating Shapley values via linear regression. ☆67 · Updated 3 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Neural Additive Models (Google Research) ☆69 · Updated 3 years ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- Codebase for INVASE: Instance-wise Variable Selection (ICLR 2019) ☆60 · Updated 4 years ago
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) ☆40 · Updated 2 years ago
- ☆59 · Updated 4 years ago
- Code for "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning" ☆43 · Updated 2 years ago
- A benchmark for distribution shift in tabular data ☆50 · Updated 9 months ago
- ☆125 · Updated 3 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆63 · Updated 2 years ago
- Local explanations with uncertainty 💐! ☆39 · Updated last year
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆65 · Updated 8 months ago
- Multi-Objective Counterfactuals ☆41 · Updated 2 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆30 · Updated last year
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆130 · Updated 4 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆43 · Updated 7 months ago
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University ☆45 · Updated 2 years ago
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023. ☆17 · Updated last year
- A collection of algorithms for counterfactual explanations. ☆50 · Updated 3 years ago
- Tools for training explainable models using attribution priors. ☆123 · Updated 4 years ago
- Rule Extraction Methods for Interactive eXplainability ☆44 · Updated 2 years ago
- Code for our ICML '19 paper: Neural Network Attributions: A Causal Perspective. ☆51 · Updated 3 years ago
- Code for the Structural Agnostic Model (https://arxiv.org/abs/1803.04929) ☆52 · Updated 4 years ago
- A benchmark for evaluating the quality of local explanations generated by any explainer for text and image data ☆30 · Updated 3 years ago
- This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help … ☆24 · Updated 2 years ago
- Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees" ☆27 · Updated 4 years ago