AI4LIFE-GROUP / OpenXAI
OpenXAI : Towards a Transparent Evaluation of Model Explanations
☆237 · Updated 5 months ago
Alternatives and similar repositories for OpenXAI:
Users interested in OpenXAI are comparing it to the libraries listed below.
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆61 · Updated 2 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆578 · Updated 2 months ago
- For calculating global feature importance using Shapley values. ☆260 · Updated this week
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Local explanations with uncertainty 💐! ☆39 · Updated last year
- 💡 Adversarial attacks on explanations and how to defend them ☆308 · Updated last month
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆33 · Updated 9 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆122 · Updated 7 months ago
- Papers and code on Explainable AI, esp. w.r.t. image classification ☆201 · Updated 2 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆73 · Updated 2 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆77 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆209 · Updated 6 months ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆287 · Updated last year
- A repository for summaries of recent explainable AI/interpretable ML approaches ☆71 · Updated 3 months ago
- A framework for prototyping and benchmarking imputation methods ☆172 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 2 years ago
- PyTorch Explain: Interpretable Deep Learning in Python. ☆150 · Updated 8 months ago
- Neural Additive Models (Google Research) ☆26 · Updated 9 months ago
- 👋 Xplique is a Neural Networks Explainability Toolbox ☆661 · Updated 3 months ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆52 · Updated 2 years ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆31 · Updated last month
- An amortized approach for calculating local Shapley value explanations ☆94 · Updated last year
- Bayesian LIME ☆17 · Updated 5 months ago
- For calculating Shapley values via linear regression. ☆66 · Updated 3 years ago
- A repository for explaining feature attributions and feature interactions in deep neural networks. ☆185 · Updated 3 years ago
- Resources for Machine Learning Explainability ☆73 · Updated 4 months ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆42 · Updated 6 months ago
- Neural Additive Models (Google Research) ☆69 · Updated 3 years ago
- Concept Bottleneck Models, ICML 2020 ☆185 · Updated last year
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆118 · Updated last month
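Several of the libraries above compute Shapley-value feature attributions (global importance, amortized local explanations, linear-regression estimators). As background for what those tools approximate, here is a minimal, illustrative sketch of the exact Shapley computation by brute-force enumeration of coalitions. It is not any listed library's API; the toy value function `v` and its interaction bonus are made up for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions.

    players: list of player (feature) identifiers
    value:   function mapping a frozenset of players to a real-valued payoff
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        # Sum the weighted marginal contribution of i over every coalition S
        # that excludes i: weight = |S|! (n - |S| - 1)! / n!
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (value(S | {i}) - value(S))
        phi[i] = total
    return phi

# Hypothetical value function: a coalition's payoff is the sum of its members'
# standalone weights, plus a bonus of 1.0 when "a" and "b" act together
# (an interaction effect the Shapley values split evenly between them).
weights = {"a": 1.0, "b": 2.0, "c": 3.0}

def v(S):
    bonus = 1.0 if {"a", "b"} <= S else 0.0
    return sum(weights[p] for p in S) + bonus

phi = shapley_values(list(weights), v)
```

By the efficiency axiom, the attributions sum to `v` of the full coalition; here the interaction bonus is shared equally by "a" and "b" (0.5 each), while "c" keeps exactly its standalone weight. The libraries above exist because this exact computation is exponential in the number of features, so they estimate or amortize it.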