understandable-machine-intelligence-lab / NoiseGrad
NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weights.
☆21 · Updated last year
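For intuition, here is a minimal PyTorch-style sketch of the weight-noise idea (the names `noisegrad` and `explain` are illustrative assumptions, not the repository's actual API): draw several copies of the model with multiplicative Gaussian noise on the weights, run a base attribution method on each copy, and average the results.

```python
import copy

import torch


def noisegrad(model, x, explain, n_samples=10, std=0.2):
    """Average a base attribution over model copies with noisy weights.

    `explain(model, x)` can be any attribution function (e.g. plain input
    gradients). All names here are illustrative, not the repository's API.
    """
    attributions = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                # Multiplicative Gaussian noise on each weight:
                # w' = w * eps with eps ~ N(1, std^2)
                p.mul_(1.0 + std * torch.randn_like(p))
        attributions.append(explain(noisy, x))
    return torch.stack(attributions).mean(dim=0)
```

NoiseGrad++ additionally perturbs the input (in the spirit of SmoothGrad) before averaging, so each sample sees both a noisy model and a noisy input.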
Alternatives and similar repositories for NoiseGrad:
Users interested in NoiseGrad are comparing it to the libraries listed below.
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆35 · Updated 9 months ago
- CIFAR-5m dataset ☆38 · Updated 4 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆33 · Updated 9 months ago
- B-LRP is the repository for the paper "How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks" ☆18 · Updated 2 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 3 years ago
- Self-Explaining Neural Networks ☆39 · Updated 4 years ago
- Code for the CVPR 2021 paper "Understanding Failures of Deep Networks via Robust Feature Extraction" ☆35 · Updated 2 years ago
- Fast Axiomatic Attribution for Neural Networks (NeurIPS 2021) ☆15 · Updated last year
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆41 · Updated last year
- A PyTorch implementation of the explainable-AI work "Contrastive Layerwise Relevance Propagation (CLRP)" ☆17 · Updated 2 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 2 years ago
- PyTorch reimplementation of computing Shapley values via Truncated Monte Carlo sampling from "What is your data worth? Equitable Valuatio… ☆25 · Updated 3 years ago
- Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI ☆52 · Updated 2 years ago
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- NeurIPS 2021 | Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information ☆32 · Updated 3 years ago
- Diagnosing Vulnerability of Variational Auto-Encoders to Adversarial Attacks ☆13 · Updated 2 years ago
- PyTorch implementation of SmoothTaylor ☆15 · Updated 3 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- Learning perturbation sets for robust machine learning ☆64 · Updated 3 years ago
- Code for the paper "On the Connection Between Adversarial Robustness and Saliency Map Interpretability" by C. Etmann, S. Lunz, P. Maass, … ☆16 · Updated 5 years ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 3 years ago
- Provable Worst Case Guarantees for the Detection of Out-of-Distribution Data ☆13 · Updated 2 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆49 · Updated 3 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 4 years ago
- Implementations of orthogonal and semi-orthogonal convolutions in the Fourier domain with applications to adversarial robustness ☆43 · Updated 3 years ago