understandable-machine-intelligence-lab / NoiseGrad
NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weights.
☆22 · Updated 2 years ago
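The idea behind NoiseGrad can be sketched in a few lines: sample several copies of the model with multiplicatively perturbed weights, compute a gradient-based explanation for each copy, and average the results. The sketch below is a hypothetical, framework-free illustration of that averaging scheme (not the repo's actual API); it uses a linear model, whose input gradient is simply its weight vector, so the behavior is easy to verify.

```python
# Hypothetical NoiseGrad-style sketch (not the repo's API): perturb model
# weights with multiplicative Gaussian noise, recompute a gradient-based
# explanation per noisy copy, and average over copies.
import random

def saliency(weights, x):
    # For a linear model f(x) = sum(w_i * x_i), the input gradient is w itself.
    return list(weights)

def noisegrad(weights, x, n_samples=200, sigma=0.1, seed=0):
    rng = random.Random(seed)
    n = len(weights)
    acc = [0.0] * n
    for _ in range(n_samples):
        # Multiplicative noise: w_i' = w_i * (1 + eps), eps ~ N(0, sigma^2)
        noisy = [w * (1.0 + rng.gauss(0.0, sigma)) for w in weights]
        expl = saliency(noisy, x)
        for i in range(n):
            acc[i] += expl[i]
    return [a / n_samples for a in acc]

w = [1.0, -2.0, 0.5]
x = [0.3, 0.7, 0.1]
print(noisegrad(w, x))  # close to w here, since the noise has mean zero
```

With a real network, `saliency` would be replaced by an actual attribution method (e.g. input gradients or Integrated Gradients), and the noise would be applied to a copy of the model's parameters before each forward/backward pass.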
Alternatives and similar repositories for NoiseGrad
Users interested in NoiseGrad are comparing it to the libraries listed below.
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks. ☆42 · Updated last year
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated last month
- A way to achieve uniform confidence far away from the training data. ☆38 · Updated 4 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Contains notebooks for the PAR tutorial at CVPR 2021. ☆36 · Updated 3 years ago
- Self-Explaining Neural Networks ☆42 · Updated 5 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆35 · Updated last year
- ☆45 · Updated 2 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- CIFAR-5m dataset ☆39 · Updated 4 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆36 · Updated 3 years ago
- Code for the paper 'Understanding Measures of Uncertainty for Adversarial Example Detection' ☆61 · Updated 7 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 3 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 3 years ago
- Supporting code for the paper "Dangers of Bayesian Model Averaging under Covariate Shift" ☆33 · Updated 2 years ago
- Code to reproduce experiments from "Does Knowledge Distillation Really Work?", a paper in the NeurIPS 2021 proceedings. ☆33 · Updated last year
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ☆49 · Updated 4 years ago
- Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data ☆13 · Updated 2 years ago
- Official implementation for "Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder" at NeurIPS 2020 ☆50 · Updated 4 years ago
- Code for "Neuron Shapley: Discovering the Responsible Neurons" ☆26 · Updated last year
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆34 · Updated last year
- ☆38 · Updated 4 years ago
- PRIME: A Few Primitives Can Boost Robustness to Common Corruptions ☆42 · Updated 2 years ago
- Robust Out-of-distribution Detection in Neural Networks ☆73 · Updated 3 years ago
- Implementation of the paper "Understanding anomaly detection with deep invertible networks through hierarchies of distributions and featu… ☆42 · Updated 4 years ago
- Official repository for the AAAI-21 paper 'Explainable Models with Consistent Interpretations' ☆18 · Updated 3 years ago
- Wrap around any model to output differentially private prediction sets with finite sample validity on any dataset. ☆17 · Updated last year
- Model-agnostic posthoc calibration without distributional assumptions ☆42 · Updated last year