ZhengzeZhou / slime
☆17 · Updated last year
Alternatives and similar repositories for slime:
Users interested in slime are comparing it to the libraries listed below.
- Bayesian LIME ☆17 · Updated 9 months ago
- An Empirical Framework for Domain Generalization in Clinical Settings ☆30 · Updated 3 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- A repo for transfer learning with deep tabular models ☆102 · Updated 2 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆54 · Updated 2 years ago
- Local explanations with uncertainty 💐! ☆40 · Updated last year
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- ☆33 · Updated 10 months ago
- A benchmark for distribution shift in tabular data ☆52 · Updated 11 months ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- A Python framework for the quantitative evaluation of eXplainable AI methods ☆17 · Updated 2 years ago
- Combating hidden stratification with GEORGE ☆63 · Updated 3 years ago
- Dataset and code for the CLEVR-XAI dataset. ☆31 · Updated last year
- A PyTorch implementation of the Explainable AI work "Contrastive layerwise relevance propagation (CLRP)" ☆17 · Updated 2 years ago
- ☆9 · Updated 2 years ago
- 👋 Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆30 · Updated 2 years ago
- Code for "Consistent Estimators for Learning to Defer to an Expert" (ICML 2020) ☆13 · Updated 2 years ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆61 · Updated last month
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- NeurIPS 2021 | Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information ☆33 · Updated 3 years ago
- ☆46 · Updated 4 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated last year
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 4 years ago
- Papers and code on Explainable AI, especially with respect to image classification ☆208 · Updated 2 years ago
- NumPy library for calibration metrics ☆72 · Updated 2 months ago
- A collection of counterfactual explanation algorithms ☆50 · Updated 4 years ago
- Implementation of SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption in PyTorch, a model learning a representati… ☆77 · Updated last year
- Model-agnostic post hoc calibration without distributional assumptions ☆42 · Updated last year
- Code for the paper "Model Agnostic Interpretability for Multiple Instance Learning" ☆13 · Updated 3 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆65 · Updated 2 years ago