AI4LIFE-GROUP / OpenXAI
OpenXAI: Towards a Transparent Evaluation of Model Explanations
☆245 · Updated 8 months ago
Alternatives and similar repositories for OpenXAI:
Users who are interested in OpenXAI are comparing it to the libraries listed below.
- For calculating global feature importance using Shapley values. ☆268 · Updated last week
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆598 · Updated 2 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆65 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Papers and code on Explainable AI, especially w.r.t. image classification ☆208 · Updated 2 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- Local explanations with uncertainty 💐! ☆40 · Updated last year
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆73 · Updated 2 years ago
- Neural Additive Models (Google Research) ☆28 · Updated last year
- For calculating Shapley values via linear regression. ☆67 · Updated 3 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆314 · Updated 5 months ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆288 · Updated last year
- A Natural Language Interface to Explainable Boosting Machines ☆66 · Updated 10 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆125 · Updated 10 months ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 · Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated last year
- Codebase for information-theoretic Shapley values to explain predictive uncertainty. This repo contains the code related to the paper Watso… ☆21 · Updated 10 months ago
- A repository for summaries of recent explainable AI/interpretable ML approaches ☆74 · Updated 7 months ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆44 · Updated 2 weeks ago
- A repo for transfer learning with deep tabular models ☆102 · Updated 2 years ago
- An amortized approach for calculating local Shapley value explanations ☆97 · Updated last year
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆32 · Updated 4 months ago
- A benchmark for distribution shift in tabular data ☆52 · Updated 10 months ago
- Datasets derived from US census data ☆261 · Updated 11 months ago
- A framework for prototyping and benchmarking imputation methods ☆183 · Updated 2 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆54 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 3 years ago
- A toolkit for quantitative evaluation of data attribution methods. ☆45 · Updated 2 weeks ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆224 · Updated 9 months ago
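Several of the libraries above (global Shapley importance, Shapley values via linear regression, amortized Shapley explanations, removal-based explanations) revolve around the same removal-based attribution idea that OpenXAI-style benchmarks then evaluate. As a rough, self-contained illustration, and not the API of any repository listed here, the sketch below estimates Shapley-value attributions for a single prediction by Monte Carlo sampling of feature orderings, marginalizing removed features with rows drawn from a background set; the dataset, model, and every function name are assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A simple model to explain (hypothetical setup for this sketch).
X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

def predict(a):
    """Probability of the positive class; the scalar output we attribute."""
    return model.predict_proba(a)[:, 1]

def shapley_attributions(predict, x, background, n_samples=200):
    """Monte Carlo estimate of Shapley values for one instance.

    "Removed" features keep the values of a randomly drawn background row;
    features are then switched to the explained instance x one at a time in
    a random order, and each feature is credited with the resulting change
    in the prediction.
    """
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        z = background[rng.integers(len(background))].copy()  # empty coalition
        prev = predict(z[None, :])[0]
        for j in rng.permutation(d):                          # random ordering
            z[j] = x[j]                                       # add feature j
            curr = predict(z[None, :])[0]
            phi[j] += curr - prev                             # marginal contribution
            prev = curr
    return phi / n_samples

background = X[rng.choice(len(X), size=100, replace=False)]
x = X[0]
phi = shapley_attributions(predict, x, background)

# Efficiency check: attributions should roughly sum to the gap between the
# prediction for x and the mean prediction over the background rows.
print(phi.sum(), predict(x[None, :])[0] - predict(background).mean())
```

The one-background-row-per-permutation sampling used here is the simplest marginalization scheme; evaluation toolkits of the kind listed above would then score such attribution vectors with metrics like faithfulness or stability rather than produce them.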