giorgiovisani / lime_stability
☆33 · Updated last year
Alternatives and similar repositories for lime_stability
Users interested in lime_stability are comparing it to the libraries listed below.
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 2 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆74 · Updated 3 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆248 · Updated last year
- Repository for the results of my master's thesis on the generation and evaluation of synthetic data using GANs ☆45 · Updated 2 years ago
- Responsible AI knowledge base ☆107 · Updated 2 years ago
- All about explainable AI, algorithmic fairness and more ☆110 · Updated 2 years ago
- In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME) and the experimental results o… ☆28 · Updated 2 years ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆294 · Updated 2 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆328 · Updated 10 months ago
- ☆18 · Updated 2 years ago
- GEBI: Global Explanations for Bias Identification. Open-source code for discovering bias in data with a skin lesion dataset ☆18 · Updated 3 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021). ☆220 · Updated 2 years ago
- Model Agnostic Counterfactual Explanations ☆88 · Updated 3 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆45 · Updated 4 months ago
- ☆207 · Updated 4 years ago
- Explainable AI with Python, published by Packt ☆164 · Updated last month
- Multi-Objective Counterfactuals ☆42 · Updated 3 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆627 · Updated 3 months ago
- This repo accompanies the FF22 research cycle focused on unsupervised methods for detecting concept drift ☆30 · Updated 4 years ago
- A Python framework for the quantitative evaluation of eXplainable AI methods ☆17 · Updated 2 years ago
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP! ☆29 · Updated 6 years ago
- Papers and code on Explainable AI, especially w.r.t. image classification ☆218 · Updated 3 years ago
- Neural Additive Models (Google Research) ☆30 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆75 · Updated 3 years ago
- Explaining Anomalies Detected by Autoencoders Using SHAP ☆33 · Updated 5 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 4 years ago
- Generating tabular synthetic data using a state-of-the-art GAN architecture ☆80 · Updated 5 years ago
- A visual analytic system for fair data-driven decision making ☆26 · Updated 2 years ago
- Benchmarking synthetic data generation methods. ☆281 · Updated this week