Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
☆649 · Updated Mar 9, 2026
Alternatives and similar repositories for Quantus
Users interested in Quantus are comparing it to the libraries listed below.
- Mechanistic understanding and validation of large AI models with SemanticLens (☆51, updated Dec 4, 2025)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (☆252, updated Aug 17, 2024)
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models, presented at MICCAI 2023 (☆20, updated Jan 17, 2024)
- Layer-wise Relevance Propagation for Large Language Models and Vision Transformers [ICML 2024] (☆227, updated Jul 11, 2025)
- 👋 Xplique is a Neural Networks Explainability Toolbox (☆733, updated Feb 24, 2026)
- Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers, accepted at the eXCV workshop of ECCV 2… (☆30, updated Jan 6, 2025)
- CoRelAy is a tool to compose small-scale (single-machine) analysis pipelines (☆31, updated Jul 21, 2025)
- Explainable AI in Julia (☆116, updated Mar 9, 2026)
- Prototypical Concept-based Explanations, accepted at the SAIAD workshop at CVPR 2024 (☆15, updated Feb 20, 2026)
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weights (☆22, updated May 11, 2023)
- Interpretability and explainability of data and machine learning models (☆1,768, updated Feb 26, 2025)
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons (☆19, updated Jan 28, 2026)
- ☆15, updated Jun 4, 2025
- A toolkit for quantitative evaluation of data attribution methods (☆57, updated Jul 14, 2025)
- A Python framework for the quantitative evaluation of eXplainable AI methods (☆18, updated Mar 29, 2023)
- Model interpretability and understanding for PyTorch (☆5,580, updated Mar 11, 2026)
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023 (☆15, updated Jan 16, 2024)
- A collection of research materials on explainable AI/ML (☆1,623, updated Mar 7, 2026)
- GitHub repository for the DORA: Data-agnOstic Representation Analysis paper. DORA finds outlier representations in Deep Neural Networks (☆27, updated Mar 19, 2023)
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at the CVPR 2024 XAI4CV Workshop (☆20, updated May 29, 2024)
- Algorithms for explaining machine learning models (☆2,621, updated Oct 17, 2025)
- OmniXAI: A Library for eXplainable AI (☆963, updated Jul 23, 2024)
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (☆300, updated Oct 2, 2023)
- 💡 Adversarial attacks on explanations and how to defend them (☆335, updated Nov 30, 2024)
- B-LRP is the repository for the paper "How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks" (☆18, updated Jun 22, 2022)
- ☆29, updated Jan 30, 2025
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… (☆128, updated Mar 22, 2021)
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems (☆77, updated Mar 26, 2022)
- InterpretDL: Interpretation of Deep Learning Models, a model interpretability algorithm library built on PaddlePaddle (☆261, updated Sep 4, 2024)
- Framework-agnostic implementation of state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more) (☆994, updated Mar 20, 2024)
- Official implementation of 'Robust Semantic Interpretability: Revisiting Concept Activation Vectors' (☆11, updated Jul 15, 2020)
- [XAI4CV CVPR 2023] Towards Evaluating Explanations of Vision Transformers for Medical Imaging (☆10, updated Dec 1, 2023)
- [NeurIPS 2024] Official implementation of the paper "MambaLRP: Explaining Selective State Space Sequence Models" 🐍 (☆45, updated Nov 6, 2024)
- Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible) (☆1,573, updated Feb 24, 2026)
- How to predict extreme events in climate using rare event algorithms and modern tools of machine learning (☆24, updated Mar 27, 2025)
- Papers and code on Explainable AI, especially w.r.t. image classification (☆226, updated Jun 28, 2022)
- Repository for the paper "Interpreting Temporal Graph Neural Networks with Koopman Theory" (☆12, updated Jan 30, 2025)
- xMIL: Insightful Explanations for Multiple Instance Learning in Histopathology (☆26, updated Feb 25, 2026)
- Code for RELAX, a framework for explaining representations (☆12, updated Jan 7, 2024)