giorgiovisani / lime_stability
☆33 · Updated last year
Alternatives and similar repositories for lime_stability
Users interested in lime_stability are comparing it to the libraries listed below.
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆248 · Updated last year
- All about explainable AI, algorithmic fairness and more ☆110 · Updated 2 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆74 · Updated 3 years ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆295 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- Responsible AI knowledge base ☆108 · Updated 2 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆628 · Updated 3 months ago
- ☆211 · Updated 4 years ago
- Python package for tackling multi-class imbalance problems. http://www.cs.put.poznan.pl/mlango/publications/multiimbalance/ ☆79 · Updated last year
- GEBI: Global Explanations for Bias Identification. Open source code for discovering bias in data with a skin lesion dataset ☆18 · Updated 3 years ago
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021) ☆220 · Updated 2 years ago
- This repo accompanies the FF22 research cycle focused on unsupervised methods for detecting concept drift ☆30 · Updated 4 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆45 · Updated 5 months ago
- 💡 Adversarial attacks on explanations and how to defend them ☆328 · Updated 11 months ago
- Multi-Objective Counterfactuals ☆42 · Updated 3 years ago
- Explainable AI with Python, published by Packt ☆165 · Updated 2 months ago
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ☆266 · Updated last month
- In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME) and the experimental results o… ☆28 · Updated 2 years ago
- This repository contains the code for all figures in the paper "General Pitfalls of Model-agnostic Interpretation Methods for Machine Lea… ☆15 · Updated 4 years ago
- Repository for the results of my master thesis, about the generation and evaluation of synthetic data using GANs ☆45 · Updated 2 years ago
- This repository is all about papers and tools of Explainable AI ☆36 · Updated 5 years ago
- ☆18 · Updated 2 years ago
- Meaningful Local Explanation for Machine Learning Models ☆42 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models ☆59 · Updated 4 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆75 · Updated 3 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆69 · Updated last year
- Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin to the domain of time series classification ☆97 · Updated last year
- For calculating global feature importance using Shapley values ☆279 · Updated last week
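
lime_stability and several of the repositories above revolve around the same practical question: how much do LIME explanations change when the same instance is explained repeatedly? The sketch below is only an illustration of that instability using the standard `lime` package, not lime_stability's own API (which, per its accompanying paper, provides dedicated stability indices such as VSI and CSI); the model, dataset, repeat count, and the Jaccard-based agreement score are all placeholder choices.

```python
# Illustrative sketch: quantify LIME explanation instability by re-running the
# explainer on a single instance and measuring top-feature agreement across runs.
# Uses scikit-learn and the standard `lime` package; the agreement metric here
# is a plain Jaccard overlap, not the VSI/CSI indices provided by lime_stability.
from itertools import combinations

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

instance = X[0]
n_repeats, top_k = 10, 5

# Collect the top-k feature descriptions from each repeated explanation.
# (With the default discretizer the bin descriptions are fixed, so string
# comparison across runs is meaningful.)
top_feature_sets = []
for _ in range(n_repeats):
    exp = explainer.explain_instance(instance, model.predict_proba, num_features=top_k)
    top_feature_sets.append({name for name, _ in exp.as_list()})

# Pairwise Jaccard overlap of the top-k sets: 1.0 means the same features
# were selected in every run, lower values indicate unstable explanations.
overlaps = [len(a & b) / len(a | b) for a, b in combinations(top_feature_sets, 2)]
print(f"mean top-{top_k} Jaccard overlap over {n_repeats} runs: {np.mean(overlaps):.3f}")
```

A low overlap on a given instance is exactly the situation the stability-oriented tools listed above are meant to detect and report in a more principled way.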