understandable-machine-intelligence-lab / NoiseGrad
NoiseGrad (and its extension NoiseGrad++) is a method for enhancing explanations of artificial neural networks by adding noise to the model weights and averaging the explanations computed under the perturbed weights.
☆22, updated last year
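The core idea can be sketched in a few lines: draw several copies of the weights perturbed with multiplicative Gaussian noise, compute an explanation under each, and average. The sketch below uses a plain linear model f(x) = W @ x, where the input-gradient explanation for class c is simply the row W[c]; the function name and defaults are illustrative, not the repository's actual API.

```python
import numpy as np

# Minimal sketch of the NoiseGrad idea for a linear model f(x) = W @ x.
# For such a model the input-gradient explanation for class `target` is
# the row W[target] (it does not depend on the input x). NoiseGrad
# averages explanations computed under multiplicative Gaussian weight
# noise W' = W * (1 + eps), eps ~ N(0, std^2).
# Function name and defaults are hypothetical, not the repo's API.
def noisegrad_explanation(W, target=0, n_samples=50, std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    grads = np.zeros(W.shape[1])
    for _ in range(n_samples):
        # Perturb the weights multiplicatively.
        W_noisy = W * (1.0 + std * rng.standard_normal(W.shape))
        # Input gradient of the target logit under the noisy weights.
        grads += W_noisy[target]
    return grads / n_samples  # averaged explanation
```

Because the noise has zero mean, the average converges to the clean gradient W[target] as the sample count grows; for nonlinear networks and nonlinear attribution methods the averaged explanation can differ from the clean one, which is where the claimed denoising effect comes from.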
Alternatives and similar repositories for NoiseGrad:
- Active and Sample-Efficient Model Evaluation (☆24, updated 4 years ago)
- Self-Explaining Neural Networks (☆42, updated 5 years ago)
- B-LRP is the repository for the paper How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks (☆18, updated 2 years ago)
- 👋 Code for the paper: "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) (☆29, updated 2 years ago)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (☆34, updated last year)
- ☆45, updated 2 years ago
- Explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective (https://arxiv.org/abs/1912.02757) by Stanislav Fort, Huiyi … (☆65, updated 4 years ago)
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) (☆13, updated 2 years ago)
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks (☆49, updated 4 years ago)
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction (☆35, updated 2 years ago)
- A way to achieve uniform confidence far away from the training data. (☆38, updated 4 years ago)
- This repository contains an official implementation of LPBNN. (☆39, updated last year)
- ☆35, updated last year
- Simple data balancing baselines for worst-group-accuracy benchmarks. (☆42, updated last year)
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" (☆160, updated last year)
- CIFAR-5m dataset (☆38, updated 4 years ago)
- Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding (☆73, updated 3 years ago)
- Quantitative Testing with Concept Activation Vectors in PyTorch (☆42, updated 6 years ago)
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… (☆25, updated 3 years ago)
- Guarantees on the behavior of neural networks don't always have to come at the cost of performance. (☆28, updated 2 years ago)
- Contains notebooks for the PAR tutorial at CVPR 2021. (☆36, updated 3 years ago)
- Code for the paper 'Understanding Measures of Uncertainty for Adversarial Example Detection' (☆60, updated 6 years ago)
- ModelDiff: A Framework for Comparing Learning Algorithms (☆56, updated last year)
- Robust Out-of-distribution Detection in Neural Networks (☆72, updated 3 years ago)
- Code for "On Feature Collapse and Deep Kernel Learning for Single Forward Pass Uncertainty". (☆112, updated 2 years ago)
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" (☆22, updated 3 years ago)
- PyTorch reimplementation of computing Shapley values via Truncated Monte Carlo sampling from "What is your data worth? Equitable Valuatio… (☆27, updated 3 years ago)
- ☆46, updated 4 years ago
- Code for "The Intrinsic Dimension of Images and Its Impact on Learning" - ICLR 2021 Spotlight https://openreview.net/forum?id=XJk19XzGq2J (☆68, updated last year)
- Contains code for the NeurIPS 2020 paper by Pan et al., "Continual Deep Learning by Functional Regularisation of Memorable Past" (☆44, updated 4 years ago)