annahedstroem / MetaQuantus
MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics.
★35 · Updated last year
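As I understand the MetaQuantus paper, it frames metric reliability via two failure modes: noise resilience (scores should stay stable under minor input/model perturbations) and adversary reactivity (scores should shift under disruptive ones). The sketch below illustrates that idea in plain NumPy with a toy stand-in metric; all names and the linear toy model are hypothetical, and this is not the MetaQuantus API.

```python
# Conceptual sketch of meta-evaluation (NOT the MetaQuantus API).
# A reliable metric should vary little under minor model perturbations
# (noise resilience) and vary a lot under disruptive ones (adversary reactivity).
import numpy as np

rng = np.random.default_rng(0)

def toy_metric(weights, x):
    """Hypothetical stand-in for an XAI evaluation metric on a linear toy model."""
    return float(np.tanh(weights @ x))

def meta_evaluate(metric, weights, xs, sigma_minor=0.01, sigma_disruptive=1.0, trials=20):
    """Measure the spread of metric scores under model-weight perturbations
    of two magnitudes; small spread is desirable for sigma_minor, large for
    sigma_disruptive."""
    def spread(sigma):
        scores = [
            metric(weights + rng.normal(0.0, sigma, weights.shape), x)
            for _ in range(trials)
            for x in xs
        ]
        return float(np.std(scores))
    return {
        "noise_resilience": spread(sigma_minor),          # want: small
        "adversary_reactivity": spread(sigma_disruptive),  # want: large
    }

weights = rng.normal(size=8)
xs = [rng.normal(size=8) for _ in range(5)]
print(meta_evaluate(toy_metric, weights, xs))
```

The real library additionally perturbs inputs (not just model weights) and aggregates the two scores into a consolidated meta-evaluation score; this sketch only shows the perturb-and-compare pattern.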
Alternatives and similar repositories for MetaQuantus
Users interested in MetaQuantus are comparing it to the libraries listed below.
- Overcomplete is a Vision-based SAE Toolbox ★63 · Updated 2 months ago
- A toolkit for quantitative evaluation of data attribution methods. ★48 · Updated this week
- Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ★65 · Updated last year
- ★13 · Updated last month
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ★129 · Updated last year
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP. ★226 · Updated 11 months ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ★55 · Updated 2 years ago
- CoSy: Evaluating Textual Explanations ★16 · Updated 5 months ago
- Large-scale uncertainty benchmark in deep learning. ★60 · Updated last month
- LENS Project ★48 · Updated last year
- Dataset and code for CLEVR-XAI. ★31 · Updated last year
- TabDPT: Scaling Tabular Foundation Models ★30 · Updated 2 months ago
- Influenciae is a TensorFlow Toolbox for Influence Functions ★63 · Updated last year
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ★32 · Updated last year
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ★96 · Updated 3 months ago
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at CVPR 2024 XAI4CV Works… ★15 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ★247 · Updated 10 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ★67 · Updated 2 years ago
- ★120 · Updated 3 years ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… ★67 · Updated 2 years ago
- A PyTorch implementation of the explainable AI work 'Contrastive Layerwise Relevance Propagation (CLRP)' ★17 · Updated 3 years ago
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ★30 · Updated 2 years ago
- ★45 · Updated 2 years ago
- XAI Experiments on an Annotated Dataset of Wild Bee Images ★19 · Updated 6 months ago
- A JAX implementation of conformal training from the ICLR'22 paper "Learning Optimal Conformal Classi… ★129 · Updated 2 years ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models (MICCAI 2023). ★20 · Updated last year
- ★39 · Updated last year
- HCOMP '22: Eliciting and Learning with Soft Labels from Every Annotator ★10 · Updated 2 years ago
- h-Shap provides an exact, fast, hierarchical implementation of Shapley coefficients for image explanations ★16 · Updated last year
- PyTorch Explain: Interpretable Deep Learning in Python. ★156 · Updated last year