testingautomated-usi / uncertainty-wizard
Uncertainty-Wizard is a plugin on top of tensorflow.keras that lets you easily and efficiently create uncertainty-aware deep neural networks. It is also useful if you want to train multiple small models in parallel.
☆45 · Updated 2 years ago
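As a quick illustration of what the plugin offers, the sketch below builds a stochastic tf.keras classifier and asks it for both predictions and a per-input uncertainty score. This is a minimal, unverified example: the `uncertainty_wizard` import name, the `uwiz.models.StochasticSequential` wrapper, and the `predict_quantified(quantifier=..., sample_size=...)` call are assumptions based on the project's documented usage, so check the repository README for the exact API.

```python
# Minimal sketch (unverified): a stochastic tf.keras model built with uncertainty-wizard,
# queried for predictions plus an uncertainty estimate per input.
import tensorflow as tf
import uncertainty_wizard as uwiz  # import name assumed from the project

# StochasticSequential is assumed to mirror tf.keras.Sequential while keeping dropout
# active at inference time, enabling MC-dropout-style uncertainty estimates.
model = uwiz.models.StochasticSequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_train, y_train), (x_test, _) = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=1)

# predict_quantified (assumed signature) is expected to return class predictions together
# with an uncertainty value per input, here using variation ratio over 32 stochastic samples.
predictions, uncertainty = model.predict_quantified(
    x_test / 255.0, quantifier="var_ratio", sample_size=32
)
```

The second use case mentioned above, training multiple small models in parallel, is handled by the library's ensemble wrappers rather than the stochastic model shown here; see the repository for that workflow.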
Alternatives and similar repositories for uncertainty-wizard
Users interested in uncertainty-wizard are comparing it to the libraries listed below.
- ☆13 · Updated 3 years ago
- Tools and data of the paper "Model-based Exploration of the Frontier of Behaviours for Deep Learning System Testing" ☆15 · Updated last year
- Coverage-Guided Testing of Long Short-Term Memory (LSTM) Networks ☆18 · Updated 4 years ago
- Code release of a paper "Guiding Deep Learning System Testing using Surprise Adequacy" ☆49 · Updated 3 years ago
- A collection of DNN test input prioritizers often used as benchmarks in recent literature. ☆18 · Updated 2 years ago
- ☆44 · Updated 5 years ago
- Code for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 7 years ago
- Build and train Lipschitz constrained networks: TensorFlow implementation of k-Lipschitz layers ☆96 · Updated 4 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- ☆10 · Updated 4 years ago
- Utilities to perform Uncertainty Quantification on Keras Models ☆117 · Updated last year
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆31 · Updated 2 years ago
- Supervised Local Modeling for Interpretability ☆29 · Updated 6 years ago
- PyExplainer: A Local Rule-Based Model-Agnostic Technique (Explainable AI) ☆30 · Updated last year
- An uncertainty-based random sampling algorithm for data augmentation ☆30 · Updated 4 years ago
- This repository contains the artifacts accompanying the paper "Fair Preprocessing" ☆13 · Updated 3 years ago
- A certifiable defense against adversarial examples by training neural networks to be provably robust ☆221 · Updated 11 months ago
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… ☆176 · Updated 2 years ago
- Taxonomy of Real Faults in Deep Learning Systems ☆16 · Updated 5 years ago
- Official Repo for "Efficient task-specific data valuation for nearest neighbor algorithms" ☆26 · Updated 5 years ago
- DeepCover: Uncover the truth behind AI ☆32 · Updated last year
- Certifying Geometric Robustness of Neural Networks ☆16 · Updated 2 years ago
- A library for performing coverage-guided fuzzing of neural networks ☆213 · Updated 6 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ☆67 · Updated this week
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆44 · Updated last month
- The code of our paper "Misbehaviour Prediction for Autonomous Driving Systems", including our improved Udacity simulator ☆22 · Updated 4 years ago
- A systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles ☆80 · Updated 6 years ago