IBM / UQ360
Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions.
☆266 · Updated last month
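Two of the standard metrics in this space are PICP (prediction interval coverage probability) and MPIW (mean prediction interval width), which measure whether a model's predictive intervals actually cover the observations and how wide they are. The following is a minimal, library-agnostic sketch in plain NumPy; the toy data and the `y_mean`/`y_std` "model outputs" are fabricated for illustration and do not come from UQ360's API.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data with heteroscedastic noise (assumed for illustration).
x = rng.uniform(0, 10, size=500)
y = np.sin(x) + rng.normal(scale=0.1 + 0.05 * x, size=500)

# Pretend a model predicted a mean and a standard deviation per point.
y_mean = np.sin(x)
y_std = 0.1 + 0.05 * x

# 95% Gaussian prediction intervals.
lower = y_mean - 1.96 * y_std
upper = y_mean + 1.96 * y_std

# PICP: fraction of observations that fall inside their interval.
picp = np.mean((y >= lower) & (y <= upper))

# MPIW: mean prediction interval width (narrower is better at equal coverage).
mpiw = np.mean(upper - lower)
```

With well-specified noise, PICP should land near the nominal 0.95; a large gap between nominal and empirical coverage is exactly the kind of miscalibration that toolkits like UQ360 are meant to surface.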
Alternatives and similar repositories for UQ360
Users interested in UQ360 are comparing it to the libraries listed below.
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆290 · Updated last year
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ☆241 · Updated 2 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Editing machine learning models to reflect human knowledge and values ☆126 · Updated last year
- WeightedSHAP: analyzing and improving Shapley based feature attributions (NeurIPS 2022) ☆160 · Updated 2 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆603 · Updated 4 months ago
- A library for uncertainty quantification based on PyTorch ☆121 · Updated 3 years ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… ☆150 · Updated 2 years ago
- For calculating global feature importance using Shapley values. ☆271 · Updated last week
- scikit-activeml: Python library for active learning on top of scikit-learn ☆168 · Updated 3 weeks ago
- Conformalized Quantile Regression ☆275 · Updated 3 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- A Python package for building Bayesian models with TensorFlow or PyTorch ☆174 · Updated 2 years ago
- Python tools to check recourse in linear classification ☆76 · Updated 4 years ago
- A Python package for unwrapping ReLU DNNs ☆70 · Updated last year
- A Library for Uncertainty Quantification. ☆916 · Updated last month
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 10 months ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆105 · Updated last year
- This repository contains a Jax implementation of conformal training corresponding to the ICLR'22 paper "learning optimal conformal classi… ☆129 · Updated 2 years ago
- ⬛ Python Individual Conditional Expectation Plot Toolbox ☆165 · Updated 5 years ago
- All about explainable AI, algorithmic fairness and more ☆109 · Updated last year
- A library that implements fairness-aware machine learning algorithms ☆124 · Updated 4 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- Benchmarking synthetic data generation methods. ☆274 · Updated last week
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆67 · Updated 2 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆151 · Updated 3 years ago
- Code for "Uncertainty Estimation Using a Single Deep Deterministic Neural Network" ☆272 · Updated 3 years ago
- Drift Detection for your PyTorch Models ☆317 · Updated 2 years ago
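Several entries above (the PyTorch prediction-set wrapper, Conformalized Quantile Regression, and the conformal-training repository) implement variants of conformal prediction. The core split-conformal recipe is short enough to sketch in plain NumPy; the toy softmax "classifier" below is fabricated for illustration and is not the API of any listed library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "classifier": random softmax scores over 3 classes (assumption for illustration).
def toy_scores(n, n_classes=3):
    logits = rng.normal(size=(n, n_classes))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n_cal = 1000
cal_scores = toy_scores(n_cal)
cal_labels = rng.integers(0, 3, size=n_cal)

# Nonconformity score: 1 minus the probability assigned to the true class.
alpha = 0.1  # target 90% marginal coverage
nonconf = 1.0 - cal_scores[np.arange(n_cal), cal_labels]

# Finite-sample-corrected quantile of the calibration scores.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(nonconf, q_level, method="higher")

# Prediction set for each new example: every class whose score clears the threshold.
test_scores = toy_scores(5)
pred_sets = [np.where(1.0 - s <= qhat)[0] for s in test_scores]
```

The resulting sets are guaranteed (marginally, over exchangeable data) to contain the true class with probability at least 1 − α, which is the property the wrapper libraries above package up for real PyTorch models.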