understandable-machine-intelligence-lab / Quantus
Quantus is an eXplainable AI (XAI) toolkit for the responsible evaluation of neural network explanations.
★624 · Updated 2 months ago
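Quantus scores explanation quality along axes such as robustness. Below is a minimal pure-Python sketch of one such idea, max-sensitivity (the largest change in an attribution under small input perturbations, the notion behind Quantus's `MaxSensitivity` metric). The toy linear model and function names here are illustrative assumptions, not Quantus's actual API:

```python
import random

# Toy linear "model": its input-gradient is simply the weight vector (illustrative).
WEIGHTS = [0.5, -1.2, 2.0]

def attribute(x):
    # Gradient-times-input attribution for the linear model.
    return [w * xi for w, xi in zip(WEIGHTS, x)]

def max_sensitivity(x, eps=0.1, n_samples=20, seed=0):
    """Largest change in the attribution under small input perturbations:
    the idea behind robustness metrics such as Quantus's MaxSensitivity."""
    rng = random.Random(seed)
    base = attribute(x)
    worst = 0.0
    for _ in range(n_samples):
        xp = [xi + rng.uniform(-eps, eps) for xi in x]
        worst = max(worst, max(abs(a - b) for a, b in zip(attribute(xp), base)))
    return worst

score = max_sensitivity([1.0, 0.0, -1.0])
```

For this linear model the attribution change per feature is bounded by `|w_i| * eps`, so a low score here reflects an explanation that is stable under perturbation, which is what the metric rewards.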
Alternatives and similar repositories for Quantus
Users interested in Quantus are comparing it to the libraries listed below.
- Xplique is a Neural Networks Explainability Toolbox ★702 · Updated 11 months ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP. ★233 · Updated 2 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ★247 · Updated last year
- OmniXAI: A Library for eXplainable AI ★952 · Updated last year
- MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics ★39 · Updated last year
- A Library for Uncertainty Quantification. ★920 · Updated 5 months ago
- A toolbox to iNNvestigate neural networks' predictions! ★1,303 · Updated 5 months ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ★295 · Updated 2 years ago
- For calculating global feature importance using Shapley values. ★276 · Updated this week
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ★266 · Updated 2 weeks ago
- Reliability diagrams visualize whether a classifier model needs calibration ★158 · Updated 3 years ago
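A reliability diagram compares, per confidence bin, the model's average confidence against its empirical accuracy; for a calibrated classifier the two match. The following is a minimal pure-Python sketch of that binning, not the API of the library listed above:

```python
def reliability_bins(confidences, correct, n_bins=10):
    """Per-bin (avg confidence, accuracy, count): the quantities a reliability
    diagram plots. A calibrated model has avg confidence == accuracy in each bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    rows = []
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(ok for _, ok in b) / len(b)
            rows.append((avg_conf, accuracy, len(b)))
    return rows

rows = reliability_bins([0.95, 0.95, 0.55, 0.55], [1, 1, 1, 0])
```

In this toy run the 0.55-confidence bin is only 50% accurate while the 0.95 bin is 100% accurate, i.e. the low-confidence predictions happen to be calibrated and the high-confidence ones slightly underconfident.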
- A collection of research materials on explainable AI/ML ★1,560 · Updated this week
- Influenciae is a TensorFlow Toolbox for Influence Functions ★64 · Updated last year
- Experiments on Tabular Data Models ★279 · Updated 2 years ago
- Open-source framework for uncertainty and deep learning models in PyTorch ★432 · Updated last week
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ★100 · Updated 6 months ago
- Adversarial attacks on explanations and how to defend them ★328 · Updated 10 months ago
- A toolkit for quantitative evaluation of data attribution methods. ★53 · Updated 2 months ago
- The net:cal calibration framework is a Python 3 library for measuring and mitigating miscalibration of uncertainty estimates, e.g., by a… ★366 · Updated last year
- The official PyTorch implementation of the recent paper SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive… ★451 · Updated 3 years ago
- A framework for prototyping and benchmarking imputation methods ★196 · Updated 2 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ★74 · Updated 3 years ago
- scikit-activeml: A Comprehensive and User-friendly Active Learning Library ★173 · Updated 2 weeks ago
- Generate Diverse Counterfactual Explanations for any machine learning model. ★1,446 · Updated 2 months ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ★138 · Updated 4 years ago
- A unified framework of perturbation- and gradient-based attribution methods for Deep Neural Network interpretability. DeepExplain also in… ★755 · Updated 5 years ago
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ★248 · Updated 2 years ago
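Prediction sets with a coverage guarantee of the kind described in the last entry are typically built with split conformal prediction. The sketch below is a minimal pure-Python illustration of that technique under the standard `1 - p_true` nonconformity score; it is an assumption about the approach, not that wrapper's actual API:

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    """Split conformal: the ceil((n+1)(1-alpha))-th smallest calibration
    nonconformity score (1 - p_true). Sets built with this threshold contain
    the true class with probability >= 1 - alpha on exchangeable data."""
    n = len(cal_scores)
    rank = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)  # 0-based index
    return sorted(cal_scores)[rank]

def prediction_set(probs, qhat):
    # Keep every class whose nonconformity 1 - p stays within the threshold.
    return [c for c, p in enumerate(probs) if 1 - p <= qhat]

# Hypothetical calibration scores and a confident softmax output:
qhat = conformal_threshold([0.05 * i for i in range(1, 20)], alpha=0.1)
classes = prediction_set([0.05, 0.9, 0.05], 0.5)
```

Note the trade-off this makes explicit: a larger threshold (from noisier calibration scores or smaller alpha) produces larger, less informative sets in exchange for the stronger coverage guarantee.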