x-y-zhao / BayLime
Bayesian LIME
☆17 · Updated 9 months ago
Alternatives and similar repositories for BayLime:
Users interested in BayLime are comparing it to the repositories listed below.
- Local explanations with uncertainty! ☆40 · Updated last year
- ☆17 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Code for the paper "Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers" published in ICLR 2019 ☆13 · Updated 6 years ago
- A collection of algorithms for counterfactual explanations ☆50 · Updated 4 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆73 · Updated 2 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆31 · Updated 2 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- HCOMP '22 -- Eliciting and Learning with Soft Labels from Every Annotator ☆10 · Updated 2 years ago
- Model-agnostic post-hoc calibration without distributional assumptions ☆42 · Updated last year
- Dataset and code for CLEVR-XAI ☆31 · Updated last year
- A benchmark for distribution shift in tabular data ☆52 · Updated 11 months ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆150 · Updated 3 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- ☆11 · Updated 4 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- General fair regression subject to a demographic parity constraint. Paper appeared in ICML 2019. ☆15 · Updated 4 years ago
- ☆9 · Updated 2 years ago
- Papers and code on Explainable AI, esp. w.r.t. image classification ☆208 · Updated 2 years ago
- Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control ☆66 · Updated 5 months ago
- Code for "Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties" ☆18 · Updated 3 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆245 · Updated 8 months ago
- ☆12 · Updated 2 years ago
- Self-Explaining Neural Networks ☆13 · Updated last year
- Code for "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning" ☆45 · Updated 2 years ago
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weig… ☆22 · Updated last year
- Efficient Computation and Analysis of Distributional Shapley Values (AISTATS 2021) ☆21 · Updated last year
- LOcal Rule-based Explanations ☆53 · Updated last year