testingautomated-usi / uncertainty-wizard
Uncertainty-Wizard is a plugin on top of tensorflow.keras that makes it easy and efficient to create uncertainty-aware deep neural networks. It is also useful if you want to train multiple small models in parallel.
☆45Updated 2 years ago
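The core idea behind uncertainty-aware networks of this kind is to run several stochastic forward passes (e.g. with dropout active at inference time, as in MC-dropout) and use the spread of the predictions as an uncertainty estimate. The sketch below illustrates that idea with a hypothetical toy "model" in plain NumPy; it is not the uncertainty-wizard API, just the underlying concept under assumed noise parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward_passes(x, n_samples=50):
    """Stand-in for a network queried with dropout enabled at inference.

    Each call is perturbed by Gaussian noise, mimicking the randomness
    that MC-dropout-style sampling introduces. (Hypothetical toy model,
    not the uncertainty-wizard API.)
    """
    return np.array([np.sin(x) + rng.normal(0.0, 0.1, size=x.shape)
                     for _ in range(n_samples)])

x = np.linspace(0.0, np.pi, 5)
samples = stochastic_forward_passes(x)   # shape: (n_samples, 5)

mean_prediction = samples.mean(axis=0)   # point estimate per input
uncertainty = samples.std(axis=0)        # per-input uncertainty score

print(mean_prediction)
print(uncertainty)
```

Inputs where the sampled predictions disagree strongly get a high `uncertainty` score, which is the signal such libraries expose for flagging unreliable predictions.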
Alternatives and similar repositories for uncertainty-wizard
Users interested in uncertainty-wizard are comparing it to the libraries listed below.
- Tools and data of the paper "Model-based Exploration of the Frontier of Behaviours for Deep Learning System Testing"☆15Updated last year
- Code release of a paper "Guiding Deep Learning System Testing using Surprise Adequacy"☆50Updated 3 years ago
- ☆13Updated 3 years ago
- A collection of DNN test input prioritizers often used as benchmarks in recent literature.☆19Updated 2 years ago
- ☆10Updated 4 years ago
- DeepCrime - Mutation Testing Tool for Deep Learning Systems☆15Updated 2 years ago
- ETH Robustness Analyzer for Deep Neural Networks☆344Updated 2 years ago
- Replication Code for "Self-Supervised Bug Detection and Repair" NeurIPS 2021☆112Updated 3 years ago
- Coverage-Guided Testing of Long Short-Term Memory (LSTM) Networks☆18Updated 5 years ago
- ☆25Updated 4 years ago
- The code of our paper "Misbehaviour Prediction for Autonomous Driving Systems", including our improved Udacity simulator☆22Updated 4 years ago
- Taxonomy of Real Faults in Deep Learning Systems☆15Updated 5 years ago
- Contrastive Code Representation Learning: functionality-based JavaScript embeddings through self-supervised learning☆168Updated 4 years ago
- A library for performing coverage guided fuzzing of neural networks☆214Updated 7 years ago
- Certifying Geometric Robustness of Neural Networks☆16Updated 2 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)☆129Updated 4 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht…☆128Updated 4 years ago
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t…☆177Updated 2 years ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms☆298Updated 2 years ago
- Utilities to perform Uncertainty Quantification on Keras Models☆119Updated last year
- Is Neuron Coverage a Meaningful Measure for Testing Deep Neural Networks? (FSE 2020)☆10Updated 4 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019]☆50Updated 5 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples☆68Updated 5 months ago
- A certifiable defense against adversarial examples by training neural networks to be provably robust☆222Updated last year
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)☆85Updated 3 years ago
- ☆44Updated 5 years ago
- Generating Adversarial Examples for Holding Robustness of Source Code Processing Models☆14Updated 4 years ago
- PyExplainer: A Local Rule-Based Model-Agnostic Technique (Explainable AI)☆30Updated last year
- Code release for RobOT (ICSE'21)☆15Updated 3 years ago
- The official repo for GCP-CROWN paper☆13Updated 3 years ago