Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations (☆645, updated Jan 20, 2026)
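To make concrete what "evaluating an explanation" means, here is a minimal, library-free sketch of a pixel-flipping style faithfulness check on a toy linear model. This is purely illustrative: it does not use the Quantus API, and every name in it (the toy `model`, the gradient-times-input attribution) is an assumption for this example only.

```python
import numpy as np

# Toy setup (illustrative, NOT the Quantus API): a linear "model" whose
# prediction is a dot product, and a gradient*input attribution, which is
# exact for a linear model.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)

def model(inputs):
    return float(w @ inputs)

attribution = w * x

# Pixel-flipping style faithfulness check: zero out features in order of
# attributed importance and record how the prediction changes. A faithful
# attribution should account for the full prediction once all features
# are removed.
order = np.argsort(-np.abs(attribution))
scores = []
x_perturbed = x.copy()
for i in order:
    x_perturbed[i] = 0.0
    scores.append(model(x_perturbed))

# For a linear model with a zero baseline, removing every feature drives
# the prediction exactly to zero.
```

Libraries like Quantus package this idea (and many other metric families such as robustness, localisation, and complexity) behind a common interface so that attribution methods can be compared quantitatively rather than by visual inspection.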
Alternatives and similar repositories for Quantus
Users interested in Quantus are comparing it to the libraries listed below.
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (☆42, updated Apr 17, 2024)
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. (☆241, updated Jan 30, 2026)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (☆254, updated Aug 17, 2024)
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization (☆141, updated Jan 14, 2026)
- 👋 Xplique is a Neural Networks Explainability Toolbox (☆732, updated Feb 13, 2026)
- A toolbox to iNNvestigate neural networks' predictions! (☆1,307, updated Apr 11, 2025)
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at MICCAI 2023. (☆20, updated Jan 17, 2024)
- Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers. Paper accepted at the eXCV workshop of ECCV 2… (☆30, updated Jan 6, 2025)
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines. (☆30, updated Jul 21, 2025)
- Model interpretability and understanding for PyTorch (☆5,560, updated this week)
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons. (☆19, updated Jan 28, 2026)
- Prototypical Concept-based Explanations, accepted at the SAIAD workshop at CVPR 2024. (☆15, updated Feb 20, 2026)
- A collection of research materials on explainable AI/ML (☆1,617, updated Dec 11, 2025)
- (no description; ☆15, updated Jun 4, 2025)
- A toolkit for quantitative evaluation of data attribution methods. (☆56, updated Jul 14, 2025)
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (☆300, updated Oct 2, 2023)
- Source code of the ROAD benchmark for feature attribution methods (ICML 2022) (☆24, updated Jun 26, 2023)
- OmniXAI: A Library for eXplainable AI (☆963, updated Jul 23, 2024)
- Algorithms for explaining machine learning models (☆2,612, updated Oct 17, 2025)
- How to predict extreme events in climate using rare event algorithms and modern tools of machine learning (☆24, updated Mar 27, 2025)
- Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible). (☆1,573, updated Feb 4, 2026)
- (no description; ☆29, updated Jan 30, 2025)
- Bayesian LIME (☆18, updated Aug 3, 2024)
- Neural network loss functions for regression and classification tasks that can say "I don't know". (☆13, updated Dec 2, 2021)
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… (☆128, updated Mar 22, 2021)
- InterpretDL: Interpretation of Deep Learning Models, a model interpretability algorithm library based on PaddlePaddle (飞桨). (☆259, updated Sep 4, 2024)
- (no description; ☆12, updated Jun 12, 2023)
- Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more). (☆992, updated Mar 20, 2024)
- 💡 Adversarial attacks on explanations and how to defend them (☆334, updated Nov 30, 2024)
- This repository contains the implementation of Label-Free XAI, a new framework to adapt explanation methods to unsupervised models. For m… (☆25, updated Sep 21, 2022)
- Papers and code on Explainable AI, especially w.r.t. image classification (☆226, updated Jun 28, 2022)
- GitHub repository for the DORA: Data-agnOstic Representation Analysis paper. DORA finds outlier representations in Deep Neural Netwo… (☆27, updated Mar 19, 2023)
- Fast Axiomatic Attribution for Neural Networks (NeurIPS 2021) (☆16, updated this week)
- (no description; ☆21, updated Nov 19, 2020)
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" 🐍 (☆45, updated Nov 6, 2024)
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). (☆139, updated Feb 19, 2021)
- Logic Explained Networks is a Python repository implementing explainable-by-design deep learning models. (☆53, updated Jun 23, 2023)
- Quickly build Explainable AI dashboards that show the inner workings of so-called "blackbox" machine learning models. (☆2,476, updated Feb 11, 2026)
- Google Colab notebooks for the UNIL Spring 2022 course on ML for Earth and Environmental Sciences (☆14, updated Aug 18, 2022)