understandable-machine-intelligence-lab / NoiseGrad
NoiseGrad (and its extension NoiseGrad++) is a method for enhancing explanations of artificial neural networks by adding noise to the model's weights.
☆22 · Updated 2 years ago
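The idea behind the method can be illustrated with a minimal sketch: perturb the model's weights with multiplicative Gaussian noise several times, compute an explanation for each perturbed model, and average the results. The function below is a hypothetical toy illustration using a linear model (where the input gradient equals the weight vector), not the repository's actual API.

```python
import numpy as np

def noisegrad_explain(weights, x, n_samples=50, sigma=0.2, seed=0):
    """Toy sketch of the NoiseGrad idea (assumed, not the repo's API):
    average input-gradient explanations over models whose weights are
    perturbed with multiplicative Gaussian noise N(1, sigma).

    For a linear model f(x) = w @ x the input gradient is simply w,
    so the averaged explanation converges back to the original weights.
    """
    rng = np.random.default_rng(seed)
    explanations = []
    for _ in range(n_samples):
        # Sample a noisy copy of the weights: w * eta, eta ~ N(1, sigma)
        noisy_w = weights * rng.normal(1.0, sigma, size=weights.shape)
        # Explanation for this noisy model: gradient of f(x) w.r.t. x,
        # which for a linear model is just the (noisy) weight vector.
        explanations.append(noisy_w)
    # NoiseGrad explanation: mean over all noisy-model explanations
    return np.mean(explanations, axis=0)
```

In practice the per-sample explanation would be produced by any attribution method (e.g. saliency or integrated gradients) applied to the perturbed network; the averaging step is what distinguishes NoiseGrad from running the attribution method once.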
Alternatives and similar repositories for NoiseGrad
Users interested in NoiseGrad are comparing it to the repositories listed below.
- ☆66 · Updated 5 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- Code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty ☆145 · Updated 2 years ago
- Robust Out-of-distribution Detection in Neural Networks ☆73 · Updated 3 years ago
- Last-layer Laplace approximation code examples ☆83 · Updated 4 years ago
- Code for "Uncertainty Estimation Using a Single Deep Deterministic Neural Network" ☆273 · Updated 3 years ago
- Code to accompany the paper "Improving model calibration with accuracy versus uncertainty optimization" ☆56 · Updated 2 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆37 · Updated 3 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆161 · Updated last year
- A way to achieve uniform confidence far away from the training data ☆38 · Updated 4 years ago
- Code for "On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty" ☆114 · Updated 3 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- ☆46 · Updated 4 years ago
- Official implementation for Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder at NeurIPS 2020 ☆50 · Updated 4 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆43 · Updated 2 years ago
- CIFAR-5m dataset ☆39 · Updated 4 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- Active and Sample-Efficient Model Evaluation ☆25 · Updated 5 months ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments ☆75 · Updated 2 years ago
- B-LRP is the repository for the paper How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks ☆18 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆160 · Updated 3 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆75 · Updated 3 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆54 · Updated 3 years ago
- Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks ☆175 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated 3 years ago
- ☆13 · Updated 5 years ago
- Implementation of the paper "Understanding anomaly detection with deep invertible networks through hierarchies of distributions and featu… ☆42 · Updated 4 years ago
- ☆112 · Updated 2 years ago
- ☆21 · Updated 2 years ago