understandable-machine-intelligence-lab / NoiseGrad
NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weights.
☆22 · Updated 2 years ago
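The idea translates into a simple loop: sample several copies of the model with Gaussian noise applied to the weights, compute a base explanation (for example, plain gradient saliency) for each noisy copy, and average the results. The sketch below assumes a PyTorch classifier; the multiplicative-noise formulation, the parameters `n` and `sigma`, and the `noisegrad_saliency` helper are illustrative choices, not this repository's actual API.

```python
import copy
import torch

def noisegrad_saliency(model, x, target, n=10, sigma=0.2):
    """Illustrative sketch: average gradient saliency over models whose
    weights are perturbed with multiplicative Gaussian noise."""
    model.eval()
    accumulated = torch.zeros_like(x)
    for _ in range(n):
        # Work on a copy so the original weights stay untouched.
        noisy_model = copy.deepcopy(model)
        with torch.no_grad():
            for param in noisy_model.parameters():
                # Multiplicative noise centred at 1 (assumed noise scheme).
                param.mul_(1.0 + sigma * torch.randn_like(param))
        x_in = x.clone().requires_grad_(True)
        # Gradient of the target-class score w.r.t. the input.
        score = noisy_model(x_in)[:, target].sum()
        grad, = torch.autograd.grad(score, x_in)
        accumulated += grad.detach()
    return accumulated / n
```

NoiseGrad++ additionally perturbs the inputs inside the same loop (SmoothGrad-style). Deep-copying the model each iteration keeps the sketch short at the cost of memory; an in-place perturb-and-restore scheme would be more economical.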
Alternatives and similar repositories for NoiseGrad
Users interested in NoiseGrad are comparing it to the repositories listed below.
- reference implementation for "explanations can be manipulated and geometry is to blame" ☆37 · Updated 3 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆54 · Updated 3 years ago
- ☆66 · Updated 6 years ago
- Robust Out-of-distribution Detection in Neural Networks ☆73 · Updated 3 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆161 · Updated last year
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated 3 years ago
- Quantitative Testing with Concept Activation Vectors in PyTorch ☆43 · Updated 6 years ago
- Last-layer Laplace approximation code examples ☆82 · Updated 4 years ago
- This repository provides a PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… ☆23 · Updated 5 years ago
- ☆122 · Updated 3 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks. ☆43 · Updated 2 years ago
- Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks. ☆175 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆88 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆162 · Updated 3 years ago
- ☆51 · Updated 5 years ago
- ☆112 · Updated 3 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- ☆25 · Updated 3 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆139 · Updated 4 years ago
- Code for "On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty". ☆115 · Updated 3 years ago
- Code-repository for the ICML 2020 paper "Fairwashing explanations with off-manifold detergent" ☆12 · Updated 5 years ago
- Original dataset release for CIFAR-10H ☆82 · Updated 5 years ago
- NumPy library for calibration metrics ☆73 · Updated 2 months ago
- Code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty ☆145 · Updated 2 years ago
- ☆70 · Updated 6 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP. ☆239 · Updated 4 months ago
- Code for the intrinsic dimensionality estimate of data representations ☆86 · Updated 5 years ago
- Self-Explaining Neural Networks ☆43 · Updated 5 years ago