understandable-machine-intelligence-lab / NoiseGrad
NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weights.
☆22 · Updated 2 years ago
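The idea in the description above can be sketched in a few lines: draw several copies of the model with multiplicative Gaussian noise applied to the weights, compute a gradient-based explanation for each noisy copy, and average the results. A minimal NumPy sketch for a linear model (where the input gradient of `w · x` is simply `w`); the function name, parameters, and toy model are illustrative assumptions, not the repository's API:

```python
import numpy as np

def noisegrad_explain(weights, x, n_samples=25, sigma=0.2, seed=0):
    """Hypothetical sketch of NoiseGrad for a linear model f(x) = w . x.

    Perturbs the weights with multiplicative Gaussian noise
    eta ~ N(1, sigma^2) and averages the resulting gradient
    explanations (for this toy model, the gradient w.r.t. x is
    just the perturbed weight vector).
    """
    rng = np.random.default_rng(seed)
    accumulated = np.zeros_like(weights)
    for _ in range(n_samples):
        # Sample a noisy copy of the model: w' = w * eta
        noisy_w = weights * rng.normal(1.0, sigma, size=weights.shape)
        # Explanation of the noisy model at input x (gradient = w' here)
        accumulated += noisy_w
    return accumulated / n_samples

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.9])
print(noisegrad_explain(w, x))
```

With any reasonable number of samples the averaged explanation stays close to the clean gradient while smoothing out weight-space sensitivity; NoiseGrad++ additionally perturbs the inputs (as in SmoothGrad), which would add a second noise term inside the loop.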
Alternatives and similar repositories for NoiseGrad
Users interested in NoiseGrad are comparing it to the libraries listed below.
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- ☆66 · Updated 5 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆37 · Updated 3 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Code for "On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty" ☆114 · Updated 3 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- ☆68 · Updated 6 years ago
- Pruning CNN using CNN with toy example ☆21 · Updated 4 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 4 months ago
- Robust Out-of-distribution Detection in Neural Networks ☆73 · Updated 3 years ago
- ☆122 · Updated 3 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆161 · Updated last year
- Last-layer Laplace approximation code examples ☆84 · Updated 3 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP) ☆138 · Updated 4 years ago
- Reusable BatchBALD implementation ☆79 · Updated last year
- Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks ☆175 · Updated 2 years ago
- Code for "Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty" ☆144 · Updated 2 years ago
- A way to achieve uniform confidence far away from the training data ☆38 · Updated 4 years ago
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated 3 years ago
- Code to accompany the paper "Improving model calibration with accuracy versus uncertainty optimization" ☆55 · Updated 2 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆158 · Updated 3 years ago
- Original dataset release for CIFAR-10H ☆83 · Updated 4 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆42 · Updated last year
- ☆10 · Updated 3 years ago
- CIFAR-5m dataset ☆39 · Updated 4 years ago
- HCOMP '22 — Eliciting and Learning with Soft Labels from Every Annotator ☆10 · Updated 2 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated 4 years ago
- B-LRP is the repository for the paper "How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks" ☆18 · Updated 3 years ago
- Implementation of the paper "Understanding anomaly detection with deep invertible networks through hierarchies of distributions and featu… ☆42 · Updated 4 years ago