understandable-machine-intelligence-lab / NoiseGrad
NoiseGrad (and its extension NoiseGrad++) is a method for enhancing explanations of artificial neural networks by adding noise to the model weights.
☆22 · Updated 2 years ago
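The idea described above can be sketched in a few lines: draw several noisy copies of the model weights, compute an explanation for each perturbed model, and average the results. The following is a minimal NumPy sketch on a toy linear model, not the repository's actual implementation; the multiplicative Gaussian noise and the `sigma`/`n_samples` parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": y = w . x, so the input-gradient explanation is simply w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.9])

def explanation(weights, x):
    # Input-gradient saliency of y = weights . x with respect to x.
    return weights  # d(w.x)/dx = w

def noisegrad(w, x, n_samples=50, sigma=0.1):
    # NoiseGrad sketch: perturb the weights with multiplicative
    # Gaussian noise ~ N(1, sigma^2) and average the explanations.
    expls = []
    for _ in range(n_samples):
        noisy_w = w * rng.normal(1.0, sigma, size=w.shape)
        expls.append(explanation(noisy_w, x))
    return np.mean(expls, axis=0)

attr = noisegrad(w, x)
```

With a real network the `explanation` function would be any gradient-based attribution method, and the averaged map tends to be smoother than a single-model explanation because weight-space noise decorrelates spurious gradient features.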
Alternatives and similar repositories for NoiseGrad
Users interested in NoiseGrad are comparing it to the libraries listed below.
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 2 weeks ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆36 · Updated 3 years ago
- 👋 Code for the paper: "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆30 · Updated 2 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆42 · Updated last year
- Robust Out-of-distribution Detection in Neural Networks ☆72 · Updated 3 years ago
- ☆10 · Updated 3 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- CIFAR-5m dataset ☆39 · Updated 4 years ago
- Official PyTorch implementation of the Fishr regularization for out-of-distribution generalization ☆86 · Updated 3 years ago
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- ☆18 · Updated 3 years ago
- Code for the CVPR 2021 paper: MOOD: Multi-level Out-of-distribution Detection ☆38 · Updated last year
- ☆46 · Updated 4 years ago
- ☆45 · Updated 2 years ago
- ☆35 · Updated 4 years ago
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions ☆42 · Updated 2 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 6 years ago
- Code reproducing the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 3 years ago
- Notebooks for the PAR tutorial at CVPR 2021 ☆36 · Updated 3 years ago
- PyTorch implementation of "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 2 years ago
- ☆35 · Updated last year
- PyTorch reimplementation of computing Shapley values via Truncated Monte Carlo sampling from "What is your data worth? Equitable Valuatio… ☆27 · Updated 3 years ago
- ☆36 · Updated 3 years ago
- Official PyTorch implementation of "Regression Prior Networks" for effective runtime uncertainty estimation ☆36 · Updated 4 years ago
- Explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective (https://arxiv.org/abs/1912.02757) by Stanislav Fort, Huiyi … ☆65 · Updated 4 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 4 years ago
- ☆24 · Updated 3 years ago
- A way to achieve uniform confidence far away from the training data ☆38 · Updated 4 years ago