annahedstroem / MetaQuantus
MetaQuantus is an XAI performance tool to identify reliable evaluation metrics
⭐ 34 · Updated last year
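The description above says MetaQuantus identifies reliable evaluation metrics; the paper frames this as checking how an estimator's verdicts hold up under controlled perturbations. The sketch below illustrates that idea only. It is not the MetaQuantus API, and every name in it (`toy_metric`, the stand-in explanation methods, the noise scale) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_metric(x, attribution):
    # Stand-in quality estimator: correlation between |input| and |attribution|.
    # Purely illustrative; not a real faithfulness metric.
    return float(np.corrcoef(np.abs(x).ravel(), np.abs(attribution).ravel())[0, 1])

# Three stand-in "explanation methods", each mapping an input to an attribution map.
methods = {
    "gradient_like": lambda x: x + 0.1 * rng.normal(size=x.shape),
    "random": lambda x: rng.normal(size=x.shape),
    "scaled_input": lambda x: 2.0 * x,
}

def rank_methods(x_in):
    # Score every method with the estimator and return the ranking (best first).
    scores = {name: toy_metric(x_in, explain(x_in)) for name, explain in methods.items()}
    return sorted(scores, key=scores.get, reverse=True)

x = rng.normal(size=(8, 8))
baseline = rank_methods(x)

# Resilience-style check: a reliable estimator should keep its ranking of
# explanation methods when the input gets a small, meaning-preserving perturbation.
trials = [rank_methods(x + 0.01 * rng.normal(size=x.shape)) == baseline for _ in range(50)]
print(f"baseline ranking: {baseline}")
print(f"ranking preserved in {100 * np.mean(trials):.0f}% of perturbed trials")
```

The real tool operates on trained models, actual explanation methods, and Quantus-style quality estimators rather than these toy stand-ins; the sketch only shows the ranking-consistency idea.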
Alternatives and similar repositories for MetaQuantus:
Users that are interested in MetaQuantus are comparing it to the libraries listed below
- Overcomplete is a Vision-based SAE Toolbox · ⭐ 53 · Updated last month
- A toolkit for quantitative evaluation of data attribution methods. · ⭐ 45 · Updated 2 weeks ago
- LENS Project · ⭐ 48 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization · ⭐ 125 · Updated 10 months ago
- ⭐ 12 · Updated 2 weeks ago
- CoSy: Evaluating Textual Explanations · ⭐ 16 · Updated 3 months ago
- h-Shap provides an exact, fast, hierarchical implementation of Shapley coefficients for image explanations · ⭐ 16 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. · ⭐ 224 · Updated 9 months ago
- Build and train Lipschitz constrained networks: TensorFlow implementation of k-Lipschitz layers · ⭐ 96 · Updated last month
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023. · ⭐ 17 · Updated last year
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vision. · ⭐ 66 · Updated 2 years ago
- Influenciae is a Tensorflow Toolbox for Influence Functions · ⭐ 63 · Updated last year
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] · ⭐ 153 · Updated last month
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI · ⭐ 54 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques · ⭐ 65 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations · ⭐ 245 · Updated 8 months ago
- Training and evaluating NBM and SPAM for interpretable machine learning. · ⭐ 78 · Updated 2 years ago
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet · ⭐ 30 · Updated last year
- Dataset and code for the CLEVR-XAI dataset. · ⭐ 31 · Updated last year
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at the CVPR 2024 XAI4CV Workshop · ⭐ 14 · Updated 11 months ago
- ⭐ 39 · Updated 11 months ago
- Fairness toolkit for PyTorch, scikit-learn and AutoGluon · ⭐ 32 · Updated 4 months ago
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) · ⭐ 30 · Updated 2 years ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference. · ⭐ 19 · Updated last year
- ⭐ 11 · Updated this week
- Explain Neural Networks using Layer-Wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curve. · ⭐ 16 · Updated 2 years ago
- Large-scale uncertainty benchmark in deep learning. · ⭐ 56 · Updated 3 months ago
- This repository contains a JAX implementation of conformal training corresponding to the ICLR'22 paper "Learning Optimal Conformal Classifiers". · ⭐ 129 · Updated 2 years ago
- A benchmark for distribution shift in tabular data · ⭐ 52 · Updated 10 months ago
- Influence Estimation for Gradient-Boosted Decision Trees · ⭐ 27 · Updated 11 months ago