giorgiovisani / lime_stability
☆33 · Updated 4 months ago
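lime_stability addresses a well-known weakness of LIME: the local surrogate model is fit on randomly drawn perturbations, so repeated explanations of the same instance can disagree, and this repository is built around quantifying that variability. Below is a minimal sketch of the effect itself, using the standard `lime` package with an illustrative scikit-learn dataset and classifier (both are assumptions for the example, not part of this repository):

```python
# Sketch: two LIME explanations of the same instance can differ, because each
# call draws a fresh random sample of perturbations around the instance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain the same row twice: the returned feature weights (and sometimes the
# selected features) can vary from run to run.
for run in range(2):
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    print(f"run {run}:", exp.as_list())
```

Lowering `num_samples` in `explain_instance` typically makes the run-to-run disagreement more pronounced, which is the kind of instability the stability indices in this repository are meant to measure.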
Related projects
Alternatives and complementary repositories for lime_stability
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆72 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆80 · Updated last year
- In this work, we propose a deterministic version of Local Interpretable Model Agnostic Explanations (LIME) and the experimental results o… ☆29 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- Bayesian LIME ☆16 · Updated 3 months ago
- ☆16 · Updated last year
- All about explainable AI, algorithmic fairness and more ☆107 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆57 · Updated last year
- Multi-Objective Counterfactuals ☆40 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆232 · Updated 3 months ago
- 💡 Adversarial attacks on explanations and how to defend them ☆299 · Updated 8 months ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆42 · Updated 3 months ago
- A lightweight implementation of removal-based explanations for ML models. ☆57 · Updated 3 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆75 · Updated last year
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆283 · Updated last year
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆558 · Updated last week
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆30 · Updated 7 months ago
- Papers and code on Explainable AI, especially w.r.t. image classification ☆196 · Updated 2 years ago
- Code and documentation for experiments in the TreeExplainer paper ☆179 · Updated 5 years ago
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) ☆9 · Updated last year
- Local explanations with uncertainty 💐! ☆39 · Updated last year
- A collection of papers and tools for Explainable AI ☆36 · Updated 4 years ago
- Explaining Anomalies Detected by Autoencoders Using SHAP ☆40 · Updated 3 years ago
- For calculating Shapley values via linear regression. ☆65 · Updated 3 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆28 · Updated 5 years ago
- Meaningful Local Explanation for Machine Learning Models ☆41 · Updated last year
- A collection of algorithms for counterfactual explanations. ☆50 · Updated 3 years ago
- For calculating global feature importance using Shapley values. ☆253 · Updated this week
- Rule Extraction Methods for Interactive eXplainability ☆41 · Updated 2 years ago