annahedstroem / MetaQuantus
MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics (a generic sketch of the metric meta-evaluation idea follows below).
☆30 · Updated 7 months ago
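To make the one-line description above concrete, here is a hedged, minimal sketch of the meta-evaluation idea (not the MetaQuantus API; `metric_scores`, `noise_resilience`, `explain_fns`, and `score_fn` are hypothetical placeholders): a trustworthy metric should rank explanation methods consistently when the inputs are perturbed only slightly.

```python
# Hedged illustration of metric meta-evaluation, not the MetaQuantus API:
# a reliable XAI metric should rank explanation methods the same way when
# inputs are perturbed only mildly. All names here are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def metric_scores(inputs, explain_fns, score_fn):
    """Average metric score per explanation method over a batch of inputs."""
    return np.array([np.mean([score_fn(x, explain(x)) for x in inputs])
                     for explain in explain_fns])

def noise_resilience(inputs, explain_fns, score_fn, sigma=0.01, trials=5):
    """Spearman rank correlation between clean and noise-perturbed scores.
    Values near 1 suggest the metric is stable under harmless perturbations."""
    base = metric_scores(inputs, explain_fns, score_fn)
    rhos = []
    for _ in range(trials):
        noisy = [x + rng.normal(0.0, sigma, size=x.shape) for x in inputs]
        rho, _ = spearmanr(base, metric_scores(noisy, explain_fns, score_fn))
        rhos.append(rho)
    return float(np.mean(rhos))

# Toy usage with dummy explainers and a dummy metric, just to show the shapes.
inputs = [rng.normal(size=8) for _ in range(16)]
explainers = [lambda x: np.abs(x), lambda x: x ** 2,
              lambda x: rng.normal(size=x.shape)]
dummy_metric = lambda x, attribution: float(np.corrcoef(np.abs(x), attribution)[0, 1])
print(noise_resilience(inputs, explainers, dummy_metric))
```

A metric whose ranking of methods collapses under such harmless noise (or, conversely, stays unchanged under deliberately disruptive perturbations) is a poor yardstick; that is the kind of failure mode a meta-evaluation is meant to expose.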
Related projects
Alternatives and complementary repositories for MetaQuantus
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆118 · Updated 5 months ago
- Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP (a minimal LRP sketch appears after this list). ☆203 · Updated 4 months ago
- 👋 Code for the paper "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆56 · Updated last year
- LENS Project ☆42 · Updated 9 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆57 · Updated last year
- h-Shap provides an exact, fast, hierarchical implementation of Shapley coefficients for image explanations ☆15 · Updated last year
- CoSy: Evaluating Textual Explanations ☆14 · Updated last month
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For the TensorFlow/Keras implementation, see ht… ☆27 · Updated last week
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆89 · Updated last month
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆52 · Updated 2 years ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… ☆60 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆233 · Updated 3 months ago
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆100 · Updated last week
- 👋 Code for the paper: "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆27 · Updated 2 years ago
- A PyTorch implementation of the Explainable AI work 'Contrastive layerwise relevance propagation (CLRP)' ☆17 · Updated 2 years ago
- A toolkit for quantitative evaluation of data attribution methods. ☆33 · Updated this week
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at MICCAI 2023. ☆19 · Updated 10 months ago
- An Empirical Framework for Domain Generalization in Clinical Settings ☆28 · Updated 2 years ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆53 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- 👋 Influenciae is a TensorFlow Toolbox for Influence Functions ☆56 · Updated 7 months ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆76 · Updated last year
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines. ☆27 · Updated 2 years ago
- Bayesianize: A Bayesian neural network wrapper in PyTorch ☆87 · Updated 6 months ago
- ViRelAy is a visualization tool for the analysis of data as generated by CoRelAy. ☆27 · Updated this week
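Several entries above (Zennit, CLRP, LRP for Large Language Models) revolve around layer-wise relevance propagation. As a rough orientation, here is a minimal, hedged sketch of the LRP epsilon rule for a plain MLP; the toy model and helper function are hypothetical and do not reflect any listed library's API.

```python
# Hedged sketch of the LRP epsilon rule on a toy MLP. This is an illustration
# of the general technique, not the API of Zennit, CLRP, or any repo above.
import torch
import torch.nn as nn

def lrp_epsilon_linear(layer: nn.Linear, a: torch.Tensor,
                       relevance: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Redistribute output relevance of a linear layer to its input (epsilon rule)."""
    z = layer(a)                        # pre-activations of this layer
    z = z + eps * torch.sign(z)         # epsilon term stabilises the division
    s = relevance / z
    c = s @ layer.weight                # propagate back through the weights
    return a * c

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x = torch.randn(1, 4)

# Forward pass, caching the input activations of each linear layer.
activations, h = [], x
for layer in model:
    if isinstance(layer, nn.Linear):
        activations.append(h)
    h = layer(h)

# Seed relevance with the chosen logit, then propagate it back layer by layer.
relevance = torch.zeros_like(h)
relevance[0, h.argmax()] = h[0, h.argmax()]
linear_layers = [l for l in model if isinstance(l, nn.Linear)]
for layer, a in zip(reversed(linear_layers), reversed(activations)):
    relevance = lrp_epsilon_linear(layer, a, relevance)

print(relevance)  # per-input-feature relevance, summing roughly to the chosen logit
```

Writing this backward pass by hand does not scale to real architectures; toolkits such as Zennit instead assign such rules per layer type through composites, which is what makes LRP practical for the larger models referenced in this list.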