annahedstroem / MetaQuantus
MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics.
☆33 · Updated 10 months ago
Alternatives and similar repositories for MetaQuantus:
Users interested in MetaQuantus are comparing it to the libraries listed below.
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP. ☆209 · Updated 7 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆123 · Updated 8 months ago
- A toolkit for quantitative evaluation of data attribution methods. ☆39 · Updated this week
- CoSy: Evaluating Textual Explanations ☆14 · Updated 3 weeks ago
- ☆11 · Updated last year
- Library implementing state-of-the-art concept-based and disentanglement learning methods for Explainable AI ☆52 · Updated 2 years ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… ☆62 · Updated 2 years ago
- Conformal prediction for uncertainty quantification in image segmentation ☆20 · Updated 2 months ago
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆61 · Updated last year
- LENS Project ☆46 · Updated 11 months ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at MICCAI 2023. ☆19 · Updated last year
- HCOMP '22: Eliciting and Learning with Soft Labels from Every Annotator ☆10 · Updated 2 years ago
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] ☆124 · Updated last week
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆91 · Updated last week
- Code and data for the CLEVR-XAI dataset. ☆31 · Updated last year
- Updated code base for GlanceNets: Interpretable, Leak-proof Concept-based Models ☆25 · Updated last year
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weig… ☆21 · Updated last year
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at the CVPR 2024 XAI4CV Works… ☆12 · Updated 8 months ago
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023. ☆13 · Updated last year
- ☆50 · Updated 3 weeks ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 2 years ago
- CoRelAy is a tool for composing small-scale (single-machine) analysis pipelines. ☆27 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature-attribution explainability techniques ☆62 · Updated 2 years ago
- 👋 Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆27 · Updated 2 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 2 years ago
- A PyTorch implementation of the Explainable AI work "Contrastive layerwise relevance propagation (CLRP)" ☆17 · Updated 2 years ago
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆86 · Updated 2 years ago
- A fairness library in PyTorch. ☆27 · Updated 6 months ago
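Several of the repositories above implement Layer-wise Relevance Propagation. As a rough orientation to what these libraries compute, here is a minimal NumPy sketch of the LRP-ε rule for a single linear layer; the function name and toy data are illustrative and do not come from any of the listed projects.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """One backward LRP-epsilon step through a linear layer (no bias).

    a:     (n_in,)        input activations to the layer
    W:     (n_in, n_out)  weight matrix
    R_out: (n_out,)       relevance arriving at the layer's outputs
    Returns relevance (n_in,) redistributed to the inputs.
    """
    z = a @ W                      # forward pre-activations
    z = z + eps * np.sign(z)       # epsilon stabiliser avoids division by ~0
    s = R_out / z                  # relevance per unit of pre-activation
    return a * (W @ s)             # redistribute in proportion to a_j * w_jk

# Toy example: one relevance pass on random data (zero bias, so total
# relevance is conserved up to the epsilon stabiliser).
rng = np.random.default_rng(0)
a = rng.normal(size=4)
W = rng.normal(size=(4, 3))
R_out = np.maximum(a @ W, 0.0)     # e.g. start from ReLU'd output scores
R_in = lrp_epsilon(a, W, R_out)
print(R_in.sum(), R_out.sum())     # approximately equal
```

Libraries such as Zennit generalize this idea with per-layer rule composites (ε, z⁺, αβ, …) applied across a full network; with a bias term, the bias absorbs part of the relevance and conservation holds only approximately.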