benedikthoeltgen / DeDUCE ☆8 · Updated 3 years ago
Alternatives and similar repositories for DeDUCE
Users interested in DeDUCE are comparing it to the repositories listed below.
- A PyTorch implementation of the Explainable AI work "Contrastive Layerwise Relevance Propagation (CLRP)" ☆17 · Updated 3 years ago
- ☆18 · Updated 3 years ago
- CME: Concept-based Model Extraction ☆11 · Updated 4 years ago
- A lightweight implementation of removal-based explanations for ML models ☆59 · Updated 3 years ago
- A straightforward implementation of an EGBM-based Generalized Additive Model ☆13 · Updated 4 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" (NeurIPS 2019) for… ☆25 · Updated 3 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆31 · Updated 2 years ago
- Code for our paper ☆13 · Updated 2 years ago
- A benchmark for distribution shift in tabular data ☆53 · Updated last year
- Code for "Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties" ☆18 · Updated 4 years ago
- Evaluating Robustness of Predictive Uncertainty Estimation: Are Dirichlet-based Models Reliable? (ICML 2021) ☆28 · Updated 2 years ago
- Official repository for the AAAI-21 paper "Explainable Models with Consistent Interpretations" ☆18 · Updated 3 years ago
- An Empirical Framework for Domain Generalization in Clinical Settings ☆30 · Updated 3 years ago
- Model-agnostic post-hoc calibration without distributional assumptions ☆42 · Updated last year
- Early exit ensembles ☆12 · Updated 3 years ago
- Experiments on meta-learning algorithms to solve few-shot domain adaptation ☆10 · Updated 3 years ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆63 · Updated last month
- CEML (Counterfactuals for Explaining Machine Learning models): a Python toolbox ☆44 · Updated 3 weeks ago
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆34 · Updated last year
- ☆44 · Updated 5 years ago
- Local explanations with uncertainty 💐! ☆40 · Updated last year
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- Code for the paper "Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers" (ICLR 2019) ☆13 · Updated 6 years ago
- A benchmark to evaluate the quality of local explanations of machine learning models, generated by any explainer, for text and image data ☆29 · Updated 4 years ago
- Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI ☆55 · Updated 2 years ago
- Code for the paper "Model Agnostic Interpretability for Multiple Instance Learning" ☆13 · Updated 3 years ago
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) ☆41 · Updated 2 years ago
- Rule Extraction Methods for Interactive eXplainability ☆43 · Updated 3 years ago
- Repository for the NeurIPS 2023 paper "Beyond Confidence: Reliable Models Should Also Consider Atypicality" ☆13 · Updated last year
- Active and Sample-Efficient Model Evaluation ☆24 · Updated last month