understandable-machine-intelligence-lab / Quantus
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
★634 · Updated 5 months ago
Alternatives and similar repositories for Quantus
Users interested in Quantus are comparing it to the libraries listed below.
- Xplique is a Neural Networks Explainability Toolbox (★724 · Updated this week)
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. (★239 · Updated 4 months ago)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (★252 · Updated last year)
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization (★139 · Updated last year)
- Papers and code of Explainable AI, esp. w.r.t. image classification (★225 · Updated 3 years ago)
- A toolbox to iNNvestigate neural networks' predictions! (★1,307 · Updated 8 months ago)
- OmniXAI: A Library for eXplainable AI (★959 · Updated last year)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (★40 · Updated last year)
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… (★268 · Updated 3 months ago)
- ★500 · Updated last year
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (★298 · Updated 2 years ago)
- Reliability diagrams visualize whether a classifier model needs calibration (★163 · Updated 3 years ago)
- A Library for Uncertainty Quantification. (★924 · Updated 8 months ago)
- Influenciae is a TensorFlow Toolbox for Influence Functions (★64 · Updated last year)
- Experiments on Tabular Data Models (★281 · Updated 2 years ago)
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. (★102 · Updated 3 years ago)
- For calculating global feature importance using Shapley values. (★282 · Updated last week)
- The net:cal calibration framework is a Python 3 library for measuring and mitigating miscalibration of uncertainty estimates, e.g., by a … (★369 · Updated last year)
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). (★139 · Updated 4 years ago)
- Lightweight, useful implementation of conformal prediction on real data. (★995 · Updated last month)
- Adversarial attacks on explanations and how to defend them (★330 · Updated last year)
- Open-source framework for uncertainty and deep learning models in PyTorch (★462 · Updated 2 weeks ago)
- The WeightWatcher tool for predicting the accuracy of Deep Neural Networks (★1,699 · Updated last week)
- A collection of research materials on explainable AI/ML (★1,593 · Updated 2 weeks ago)
- Generate Diverse Counterfactual Explanations for any machine learning model. (★1,478 · Updated 5 months ago)
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… (★74 · Updated 3 years ago)
- scikit-activeml: A Comprehensive and User-friendly Active Learning Library (★180 · Updated last week)
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems (★75 · Updated 3 years ago)
- Build and train Lipschitz constrained networks: TensorFlow implementation of k-Lipschitz layers (★100 · Updated 9 months ago)
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… (★761 · Updated 5 years ago)