rehmanzafar / dlime_experiments
In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
☆28 · Updated last year
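For context, DLIME replaces LIME's random perturbation step with a deterministic neighbourhood: the training data is partitioned with agglomerative hierarchical clustering, the instance to be explained is assigned to a cluster via KNN, and a linear surrogate is fitted on that cluster. Below is a minimal scikit-learn sketch of that idea; the function name and signature are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of DLIME's deterministic neighbourhood selection
# (hierarchical clustering + KNN instead of LIME's random sampling).
# Names and defaults here are illustrative, not the repo's API.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LinearRegression

def dlime_explain(X_train, black_box_predict, x, n_clusters=2, k=1):
    """Explain the prediction for instance x with a local linear surrogate.

    X_train           : training data, shape (n_samples, n_features)
    black_box_predict : callable returning model outputs for an array
    x                 : instance to explain, shape (n_features,)
    """
    # 1. Deterministically partition the training data.
    clusters = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)

    # 2. Assign x to a cluster via KNN (the clustering itself
    #    cannot label unseen points).
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, clusters)
    cluster_id = knn.predict(x.reshape(1, -1))[0]

    # 3. The neighbourhood is the cluster containing x -- the same
    #    points on every run, hence a deterministic explanation.
    neighbourhood = X_train[clusters == cluster_id]

    # 4. Fit an interpretable surrogate to the black box's outputs on
    #    that neighbourhood; its coefficients serve as the explanation.
    surrogate = LinearRegression().fit(
        neighbourhood, black_box_predict(neighbourhood))
    return surrogate.coef_
```

Because the neighbourhood comes from clustering rather than random sampling, repeated calls on the same instance yield the same explanation, which is the stability property the description claims over standard LIME.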
Alternatives and similar repositories for dlime_experiments
Users interested in dlime_experiments are comparing it to the libraries listed below.
- ☆33 · Updated 11 months ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆72 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Multi-Objective Counterfactuals ☆41 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆28 · Updated 5 years ago
- All about explainable AI, algorithmic fairness and more ☆109 · Updated last year
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University ☆45 · Updated 2 years ago
- LOcal Rule-based Explanations ☆52 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated last year
- Meaningful Local Explanation for Machine Learning Models ☆41 · Updated 2 years ago
- Create sparse and accurate risk scoring systems! ☆37 · Updated 10 months ago
- ICML 2018: "Adversarial Time-to-Event Modeling" ☆37 · Updated 6 years ago
- Repository of the paper "Defining Locality for Surrogates in Post-hoc Interpretability", published at the 2018 ICML Workshop on Human Interpret… ☆17 · Updated 3 years ago
- Rule Extraction Methods for Interactive eXplainability ☆43 · Updated 2 years ago
- A Python package for unwrapping ReLU DNNs ☆70 · Updated last year
- Radial-Based Undersampling for Imbalanced Data Classification ☆12 · Updated 6 years ago
- Fast Correlation-Based Feature Selection ☆31 · Updated 8 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Guidelines for the responsible use of explainable AI and machine learning. ☆17 · Updated 2 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆44 · Updated this week
- ☆17 · Updated last year
- Extended Complexity Library in R ☆57 · Updated 4 years ago
- This repo accompanies the FF22 research cycle focused on unsupervised methods for detecting concept drift ☆30 · Updated 3 years ago
- P. Domingos proposed a principled method for making an arbitrary classifier cost-sensitive by wrapping a cost-minimizing procedure around… ☆39 · Updated 6 years ago
- Code and documentation for experiments in the TreeExplainer paper ☆186 · Updated 5 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆131 · Updated 4 years ago
- Surrogate Assisted Feature Extraction ☆37 · Updated 3 years ago
- Seminar on Limitations of Interpretable Machine Learning Methods ☆57 · Updated 4 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 3 years ago