giorgiovisani / lime_stability

Related projects:
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human…
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)
- In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME) and the experimental results o…
- All about explainable AI, algorithmic fairness and more
- OpenXAI: Towards a Transparent Evaluation of Model Explanations
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
- Bayesian LIME
- Papers and code on Explainable AI, especially w.r.t. image classification
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021)
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
- Model-Agnostic Counterfactual Explanations
- 💡 Adversarial attacks on explanations and how to defend them
- Overview of different model interpretability libraries.
- Code associated with my Interpretable AI book (https://www.manning.com/books/interpretable-ai)
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP!
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
- Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin to the domain of time series classification
- A lightweight implementation of removal-based explanations for ML models.
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i…
- Evaluate real and synthetic datasets against each other
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022)
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox
- Multi-Objective Counterfactuals
- A Python framework for the quantitative evaluation of eXplainable AI methods
- A repo for transfer learning with deep tabular models
- This repository is all about papers and tools for Explainable AI
- Explainable AI with Python, published by Packt
- For calculating global feature importance using Shapley values.
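Many of the repositories above build on LIME's local-surrogate idea: sample perturbations around the instance being explained, weight them by proximity, and fit a weighted linear model whose coefficients serve as per-feature attributions. Below is a minimal numpy sketch of that idea only — it is not the `lime` library's actual implementation; the Gaussian perturbation scheme, kernel width, and function names here are illustrative assumptions (real LIME perturbs in a discretized, interpretable feature space).

```python
import numpy as np

def lime_like_explanation(predict_fn, x, n_samples=1000, kernel_width=0.75, seed=0):
    """Toy local-surrogate sketch (hypothetical helper, not the lime package):
    perturb around x, weight samples by proximity, fit a weighted linear model."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb the instance with Gaussian noise (a simplification of LIME's sampling).
    Z = x + rng.normal(scale=1.0, size=(n_samples, d))
    y = predict_fn(Z)                               # query the black-box model
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2))  # exponential proximity kernel
    # Weighted least squares via sqrt-weight scaling of rows.
    Zb = np.hstack([np.ones((n_samples, 1)), Z])    # prepend intercept column
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(Zb * sw, y * sw[:, 0], rcond=None)
    return beta[1:]                                 # per-feature attributions

# Example: for a linear black box f(x) = 3*x0 - 2*x1 the weighted surrogate
# recovers the true coefficients exactly, since the fit has zero residual.
f = lambda X: 3 * X[:, 0] - 2 * X[:, 1]
attr = lime_like_explanation(f, np.array([1.0, 2.0]))
```

The instability that the `lime_stability` repository studies comes precisely from the random sampling step above: two runs with different seeds can yield different attributions for the same instance.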