x-y-zhao / BayLime
Bayesian LIME
☆17 · Updated 6 months ago
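The repository's tagline, "Bayesian LIME", suggests the core idea: fit LIME's local surrogate model with Bayesian linear regression so that each feature attribution comes with a posterior uncertainty instead of a single point estimate. A minimal sketch of that idea (not BayLime's actual implementation; the black-box function, kernel width, and sample count here are illustrative assumptions, and scikit-learn's `BayesianRidge` stands in for the Bayesian surrogate):

```python
# Sketch of a "Bayesian LIME"-style local explanation: perturb around an
# instance, weight samples by proximity, and fit a Bayesian linear surrogate
# so each coefficient (attribution) has a posterior standard deviation.
# NOTE: illustrative assumptions throughout, not BayLime's own code.
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in black-box model: nonlinear function of two features.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

x0 = np.array([0.3, -0.7])   # instance to explain
n_samples, sigma = 500, 0.5  # assumed neighbourhood size and kernel width

# 1. Perturb around x0 and query the black box.
X_pert = x0 + rng.normal(scale=sigma, size=(n_samples, 2))
y_pert = black_box(X_pert)

# 2. Weight samples by proximity to x0 (exponential kernel, as in LIME).
d = np.linalg.norm(X_pert - x0, axis=1)
w = np.exp(-(d ** 2) / (2 * sigma ** 2))

# 3. Fit a Bayesian linear surrogate on the weighted neighbourhood.
reg = BayesianRidge()
reg.fit(X_pert - x0, y_pert, sample_weight=w)

coef_mean = reg.coef_                     # feature attributions
coef_std = np.sqrt(np.diag(reg.sigma_))   # posterior std of each attribution
```

The posterior standard deviations are what distinguish this from vanilla LIME: an attribution with a wide posterior should be trusted less than one with a narrow posterior.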
Alternatives and similar repositories for BayLime:
Users interested in BayLime are comparing it to the repositories listed below.
- ☆16 · Updated last year
- Local explanations with uncertainty! (☆39 · Updated last year)
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) (☆82 · Updated 2 years ago)
- Model Agnostic Counterfactual Explanations (☆86 · Updated 2 years ago)
- A collection of algorithms for counterfactual explanations. (☆50 · Updated 3 years ago)
- XAI-Bench is a library for benchmarking feature attribution explainability techniques (☆62 · Updated 2 years ago)
- ☆11 · Updated 4 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… (☆73 · Updated 2 years ago)
- Papers and code on Explainable AI, especially for image classification (☆203 · Updated 2 years ago)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (☆33 · Updated 9 months ago)
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems (☆74 · Updated 2 years ago)
- Dataset and code for CLEVR-XAI (☆31 · Updated last year)
- A benchmark for distribution shift in tabular data (☆50 · Updated 8 months ago)
- A fairness library in PyTorch (☆27 · Updated 6 months ago)
- Generate robust counterfactual explanations for machine learning models (☆14 · Updated last year)
- An Empirical Framework for Domain Generalization in Clinical Settings (☆29 · Updated 2 years ago)
- This repository provides the experimental code for the paper "Instance-based Counterfactual Explanations for Time Series Classi…" (☆18 · Updated 3 years ago)
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" (☆52 · Updated 2 years ago)
- ☆12 · Updated 2 years ago
- A toolkit for quantitative evaluation of data attribution methods (☆39 · Updated this week)
- HCOMP '22: Eliciting and Learning with Soft Labels from Every Annotator (☆10 · Updated 2 years ago)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (☆239 · Updated 5 months ago)
- Neural Additive Models (Google Research) (☆69 · Updated 3 years ago)
- Model-agnostic post-hoc calibration without distributional assumptions (☆42 · Updated last year)
- Code for "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning" (☆43 · Updated 2 years ago)
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… (☆127 · Updated 3 years ago)
- Counterfactual Explanations for Multivariate Time Series Data (☆31 · Updated 11 months ago)
- CEML: Counterfactuals for Explaining Machine Learning models, a Python toolbox (☆42 · Updated 6 months ago)
- Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI (☆52 · Updated 2 years ago)
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" (NeurIPS 2019) for… (☆25 · Updated 2 years ago)