rehmanzafar / dlime_experiments
In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME). Experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
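The core idea can be sketched as follows: where LIME builds a neighbourhood by random perturbation (so explanations vary between runs), DLIME derives a deterministic neighbourhood from the training data via hierarchical clustering and KNN, then fits a linear surrogate on that neighbourhood. This is an illustrative sketch only, not the repository's code; the function name `dlime_explain` and all parameters are assumptions.

```python
# Illustrative sketch of the DLIME idea (not the repository's implementation):
# replace LIME's random perturbation sampling with a deterministic
# neighbourhood selected by hierarchical clustering + KNN.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

def dlime_explain(X_train, predict_fn, x, n_clusters=2):
    # 1. Partition the training data with agglomerative hierarchical clustering.
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
    # 2. Use a 1-NN classifier to assign the instance to one of those clusters.
    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, labels)
    cluster = knn.predict(x.reshape(1, -1))[0]
    neighbourhood = X_train[labels == cluster]
    # 3. Fit an interpretable linear surrogate on the selected cluster;
    #    its coefficients serve as the explanation. No random sampling is
    #    involved, so repeated calls yield identical explanations.
    surrogate = LinearRegression().fit(neighbourhood, predict_fn(neighbourhood))
    return surrogate.coef_

# Toy usage with a synthetic black-box model
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
black_box = lambda X: X @ np.array([1.0, -2.0, 0.5, 0.0])
weights = dlime_explain(X, black_box, X[0])
```

Because every step (clustering, cluster assignment, surrogate fit) is deterministic, calling `dlime_explain` twice on the same instance returns the same weights, which is the stability property the paper emphasizes.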
Related projects
Alternatives and complementary repositories for dlime_experiments
- Multi-Objective Counterfactuals
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)
- Model Agnostic Counterfactual Explanations
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human…
- An implementation of the TREPAN algorithm in Python. TREPAN extracts a decision tree from an ANN using a sampling method.
- Code and documentation for experiments in the TreeExplainer paper
- GAMI-Net: Generalized Additive Models with Structured Interactions
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022)
- Repository for the overview paper "Deep Learning for Survival Analysis: A Survey"
- A Python package for unwrapping ReLU DNNs
- All about explainable AI, algorithmic fairness and more
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP!
- A lightweight implementation of removal-based explanations for ML models
- LOcal Rule-based Explanations
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" (NeurIPS 2019) for…
- Explaining the output of machine learning models with more accurately estimated Shapley values
- For calculating global feature importance using Shapley values
- ICML 2018: "Adversarial Time-to-Event Modeling"
- Generalized Optimal Sparse Decision Trees
- Surrogate Assisted Feature Extraction
- Neural Additive Models (Google Research)
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University
- Extended Complexity Library in R
- Python library for classifier calibration
- Bayesian LIME