annahedstroem / MetaQuantus
MetaQuantus is an XAI performance tool to identify reliable evaluation metrics
☆30 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for MetaQuantus
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆118 · Updated 4 months ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP (see the usage sketch after this list). ☆200 · Updated 3 months ago
- LENS Project ☆42 · Updated 8 months ago
- 👋 Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆55 · Updated last year
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆52 · Updated 2 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆76 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆57 · Updated last year
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vision (see the sketch after this list). ☆59 · Updated last year
- HCOMP '22 -- Eliciting and Learning with Soft Labels from Every Annotator ☆10 · Updated 2 years ago
- This repository contains a JAX implementation of conformal training corresponding to the ICLR'22 paper "Learning Optimal Conformal Classifiers" ☆120 · Updated 2 years ago
- A toolkit for quantitative evaluation of data attribution methods. ☆32 · Updated this week
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 3 years ago
- 👋 Code for the paper: "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆27 · Updated 2 years ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at MICCAI 2023. ☆19 · Updated 9 months ago
- Bayesianize: A Bayesian neural network wrapper in PyTorch ☆86 · Updated 5 months ago
- ☆118 · Updated 2 years ago
- CoSy: Evaluating Textual Explanations ☆14 · Updated 3 weeks ago
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆95 · Updated 2 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆52 · Updated last year
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weights (see the sketch after this list). ☆21 · Updated last year
- Model-agnostic posthoc calibration without distributional assumptions ☆42 · Updated last year
- ☆22 · Updated last year
- h-Shap provides an exact, fast, hierarchical implementation of Shapley coefficients for image explanations ☆15 · Updated 11 months ago
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆36 · Updated 6 months ago
- Build and train Lipschitz constrained networks: TensorFlow implementation of k-Lipschitz layers ☆89 · Updated 3 weeks ago
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ☆29 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆35 · Updated 2 years ago
- Data Augmentation with Variational Autoencoders (TPAMI) ☆136 · Updated 2 years ago
- PyTorch code for "Improving Self-Supervised Learning by Characterizing Idealized Representations" ☆40 · Updated last year
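
For the Zennit entry above, a minimal usage sketch following the pattern in Zennit's own documentation; the toy model, input shape, and target class are illustrative stand-ins, so verify details against the repository:

```python
# Zennit-style attribution: a composite maps layer types to LRP rules,
# and the attributor context manager yields model output plus relevance.
import torch
from torch import nn
from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat

# Toy classifier standing in for a real model (illustrative only).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)
x = torch.randn(1, 3, 32, 32, requires_grad=True)

composite = EpsilonPlusFlat()  # a common LRP rule assignment for conv nets
with Gradient(model=model, composite=composite) as attributor:
    output, relevance = attributor(x, torch.eye(10)[[3]])  # one-hot, class 3

print(relevance.shape)  # relevance has the same shape as the input
```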
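For the conformal risk control entry, a self-contained sketch of the paper's threshold rule rather than the repository's code: given calibration losses that are nonincreasing in a threshold λ and bounded by B, pick the smallest λ whose finite-sample-adjusted empirical risk is at most the target level α. The function `pick_lambda` and the synthetic losses are illustrative:

```python
# Conformal risk control: choose the smallest feasible threshold lambda.
import numpy as np

def pick_lambda(losses, lambdas, alpha, B=1.0):
    """losses[i, j] = loss of calibration point i at threshold lambdas[j]."""
    n = losses.shape[0]
    mean_risk = losses.mean(axis=0)
    adjusted = (n / (n + 1)) * mean_risk + B / (n + 1)  # finite-sample correction
    ok = np.where(adjusted <= alpha)[0]
    return lambdas[ok[0]] if ok.size else None  # smallest lambda meeting alpha

# Example: synthetic losses, monotone nonincreasing in lambda by construction.
rng = np.random.default_rng(0)
lambdas = np.linspace(0, 1, 101)
losses = np.clip(rng.uniform(size=(200, 1)) - lambdas[None, :], 0, 1)
print(pick_lambda(losses, lambdas, alpha=0.1))
```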
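For the NoiseGrad entry, a minimal sketch of the core idea rather than the repository's API: average plain gradient saliency over several copies of the model whose weights are scaled by multiplicative Gaussian noise. The name `noisegrad_saliency` and the defaults for `n_samples` and `sigma` are illustrative:

```python
# NoiseGrad idea: explanations averaged over weight-perturbed model copies.
import copy
import torch

def noisegrad_saliency(model, x, target, n_samples=10, sigma=0.2):
    """Mean |d logit / d input| over models with N(1, sigma^2)-scaled weights."""
    model.eval()
    saliency = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.mul_(1 + sigma * torch.randn_like(p))  # multiplicative noise
        x_in = x.clone().requires_grad_(True)
        noisy(x_in)[0, target].backward()  # gradient of the target logit
        saliency += x_in.grad.abs()
    return saliency / n_samples
```

Per the paper, NoiseGrad++ additionally perturbs the input, SmoothGrad-style, on top of the weight noise shown here.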