amparore / leaf
A Python framework for the quantitative evaluation of eXplainable AI methods
☆17 · Updated 2 years ago
Alternatives and similar repositories for leaf
Users interested in leaf are comparing it to the libraries listed below.
- ☆17 · Updated last year
- Bayesian LIME ☆17 · Updated 10 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weig… ☆22 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 9 months ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆62 · Updated last week
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated last year
- Papers and code on Explainable AI, especially w.r.t. image classification ☆210 · Updated 2 years ago
- Model-agnostic post-hoc calibration without distributional assumptions ☆42 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆129 · Updated 11 months ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Neural Additive Models (Google Research) ☆70 · Updated 3 years ago
- 👋 Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆30 · Updated 2 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- A fairness library in PyTorch ☆29 · Updated 10 months ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" (NeurIPS 2019) for… ☆25 · Updated 3 years ago
- Dataset and code for the CLEVR-XAI dataset ☆31 · Updated last year
- Uncertainty-aware classification ☆16 · Updated 2 years ago
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆96 · Updated 2 months ago
- ☆12 · Updated last year
- Adversarial Black box Explainer generating Latent Exemplars ☆12 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆150 · Updated 3 years ago
- Implements some LRP rules to get explanations for ResNets and DenseNet-121, including batchnorm-Conv canonization and tensorbiased layers… ☆25 · Updated last year
- Unified Model Interpretability Library for Time Series ☆60 · Updated last month
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP) ☆136 · Updated 4 years ago
- This repository contains the implementation of Label-Free XAI, a new framework to adapt explanation methods to unsupervised models. For m… ☆23 · Updated 2 years ago
- Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI ☆54 · Updated 2 years ago
- Explaining Anomalies Detected by Autoencoders Using SHAP ☆41 · Updated 3 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆66 · Updated 2 years ago
- NeurIPS 2021 | Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information ☆33 · Updated 3 years ago