annahedstroem / MetaQuantus
MetaQuantus is an XAI performance tool to identify reliable evaluation metrics
★36 · Updated last year
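MetaQuantus's core idea is meta-evaluation: rather than scoring explanations, it stress-tests the evaluation metrics themselves, e.g. checking that a metric's scores stay stable under small perturbations. Below is a minimal, self-contained sketch of that idea. It is not the MetaQuantus API; `toy_metric` and `meta_evaluate` are hypothetical names used only for illustration.

```python
# Minimal sketch of the meta-evaluation idea (NOT the MetaQuantus API):
# a reliable XAI metric should score explanations consistently when the
# input receives small, prediction-preserving noise. All names here are
# hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

def toy_metric(x, attribution, weights):
    # Hypothetical faithfulness-style score: correlation between an
    # attribution and the true per-feature contribution weights * x
    # of a linear "model".
    return float(np.corrcoef(attribution, weights * x)[0, 1])

def meta_evaluate(metric, x, attribution, weights, sigma=0.01, trials=20):
    # Input-perturbation test: re-score under small input noise and
    # report mean and spread; a small spread suggests a reliable metric.
    scores = [metric(x + rng.normal(0.0, sigma, x.shape), attribution, weights)
              for _ in range(trials)]
    return float(np.mean(scores)), float(np.std(scores))

weights = rng.normal(size=10)    # fixed linear scorer standing in for a model
x = rng.normal(size=10)          # one input sample
explanations = {"faithful": weights * x,        # ideal attribution
                "random": rng.normal(size=10)}  # uninformative attribution

for name, attr in explanations.items():
    mean, std = meta_evaluate(toy_metric, x, attr, weights)
    print(f"{name}: score = {mean:.3f} ± {std:.3f}")
```

A metric that both separates the two explanations and keeps a small spread under noise passes this toy check; MetaQuantus formalizes roughly such criteria across input- and model-perturbation tests.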
Alternatives and similar repositories for MetaQuantus
Users interested in MetaQuantus are comparing it to the libraries listed below.
- Overcomplete is a Vision-based SAE Toolbox (★67 · Updated 3 months ago)
- A toolkit for quantitative evaluation of data attribution methods (★49 · Updated this week)
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization (★130 · Updated last year)
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods like LRP (★227 · Updated this week)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (★247 · Updated 10 months ago)
- Dataset and code for CLEVR-XAI (★31 · Updated last year)
- XAI-Bench is a library for benchmarking feature attribution explainability techniques (★69 · Updated 2 years ago)
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) (★65 · Updated last year)
- LENS Project (★48 · Updated last year)
- Training and evaluating NBM and SPAM for interpretable machine learning (★78 · Updated 2 years ago)
- ★121 · Updated 3 years ago
- TabDPT: Scaling Tabular Foundation Models on Real Data (★33 · Updated this week)
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers (★96 · Updated 4 months ago)
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations (★607 · Updated last week)
- Influenciae is a TensorFlow toolbox for influence functions (★63 · Updated last year)
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI (★55 · Updated 2 years ago)
- PyTorch Explain: Interpretable Deep Learning in Python (★155 · Updated last year)
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… (★63 · Updated last month)
- Reliability diagrams visualize whether a classifier model needs calibration (★153 · Updated 3 years ago; see the calibration sketch after this list)
- Layer-Wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] (★172 · Updated this week)
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons (★16 · Updated 3 weeks ago)
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weig… (★22 · Updated 2 years ago)
- ★139 · Updated last year
- relplot: Utilities for measuring calibration and plotting reliability diagrams (★163 · Updated last week)
- XAI Experiments on an Annotated Dataset of Wild Bee Images (★19 · Updated 7 months ago)
- Official PyTorch implementation of improved B-cos models (★50 · Updated last year)
- This repository contains a JAX implementation of conformal training corresponding to the ICLR'22 paper "Learning optimal conformal classi… (★130 · Updated 2 years ago)
- Large-scale uncertainty benchmark in deep learning (★60 · Updated 2 months ago)
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… (★67 · Updated 2 years ago)
- Uncertainty-aware representation learning (URL) benchmark (★105 · Updated 4 months ago)
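Several entries above concern calibration tooling (the reliability-diagrams and relplot repositories). For orientation, here is a minimal NumPy sketch of the statistics such tools plot: binned confidence versus accuracy, and the derived expected calibration error (ECE). It is not taken from either repository.

```python
# Minimal sketch of the statistic behind a reliability diagram and ECE
# (expected calibration error); plain NumPy, independent of any of the
# libraries listed above.
import numpy as np

def reliability_bins(confidences, correct, n_bins=10):
    """Per-bin mean confidence, mean accuracy, and sample counts."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(confidences, edges) - 1, 0, n_bins - 1)
    conf, acc, count = [], [], []
    for b in range(n_bins):
        mask = idx == b
        count.append(mask.sum())
        conf.append(confidences[mask].mean() if mask.any() else np.nan)
        acc.append(correct[mask].mean() if mask.any() else np.nan)
    return np.array(conf), np.array(acc), np.array(count)

def ece(confidences, correct, n_bins=10):
    """Count-weighted mean |accuracy - confidence| over the bins."""
    conf, acc, count = reliability_bins(confidences, correct, n_bins)
    valid = count > 0
    weights = count[valid] / count.sum()
    return float(np.sum(weights * np.abs(acc[valid] - conf[valid])))

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 1000)          # predicted confidences
correct = rng.uniform(size=1000) < conf**2  # overconfident toy classifier
print(f"ECE: {ece(conf, correct.astype(float)):.3f}")
```

Plotting per-bin accuracy against per-bin confidence (the outputs of `reliability_bins`) yields the familiar reliability diagram; a perfectly calibrated model sits on the diagonal.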