chihkuanyeh / saliency_evaluation
Python implementation of the evaluation metrics presented in "On the (In)fidelity and Sensitivity of Explanations" (NeurIPS 2019); the metrics can be used to evaluate any saliency explanation.
☆25 · Updated 3 years ago
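The repository implements the paper's two metrics: infidelity (how well an explanation predicts the change in model output under input perturbations) and max-sensitivity (how much the explanation itself changes under small input perturbations). As a rough illustration of the first, below is a minimal Monte-Carlo sketch of the infidelity estimator, INFD(Φ, f, x) = E_I[(Iᵀ Φ − (f(x) − f(x − I)))²]. The function names, the Gaussian perturbation distribution, and the toy linear model are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def infidelity(model, explanation, x, n_samples=1000, noise_std=0.1, seed=0):
    """Monte-Carlo estimate of infidelity: E_I[(I^T Phi - (f(x) - f(x - I)))^2].

    `model` maps a flat input vector to a scalar score and `explanation`
    is a saliency vector with the same shape as `x`. Both names are
    hypothetical; the Gaussian perturbation is one choice for the paper's
    perturbation distribution mu_I.
    """
    rng = np.random.default_rng(seed)
    base = model(x)
    errors = []
    for _ in range(n_samples):
        pert = rng.normal(scale=noise_std, size=x.shape)  # sample I ~ N(0, sigma^2)
        surrogate = pert @ explanation                    # I^T Phi(f, x)
        actual = base - model(x - pert)                   # f(x) - f(x - I)
        errors.append((surrogate - actual) ** 2)
    return float(np.mean(errors))

# Toy check: for a linear model the gradient is a perfect explanation,
# so its infidelity should be (numerically) zero.
w = np.array([1.0, -2.0, 0.5])
model = lambda v: float(w @ v)
x = np.array([0.3, 0.1, -0.4])
print(infidelity(model, w, x))
```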
Alternatives and similar repositories for saliency_evaluation:
Users who are interested in saliency_evaluation are comparing it to the libraries listed below.
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated 11 months ago
- ☆46 · Updated 4 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- ☆51 · Updated 4 years ago
- Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019) ☆100 · Updated 3 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆36 · Updated 2 years ago
- ☆54 · Updated 4 years ago
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated 3 years ago
- NeurIPS Reproducibility Challenge 2019 ☆9 · Updated 5 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆71 · Updated 11 months ago
- Learning from Failure: Training Debiased Classifier from Biased Classifier (NeurIPS 2020) ☆91 · Updated 4 years ago
- ☆37 · Updated 2 years ago
- Robust Out-of-distribution Detection in Neural Networks ☆72 · Updated 3 years ago
- ☆38 · Updated 3 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆44 · Updated 5 years ago
- Simple data balancing baselines for worst-group-accuracy benchmarks. ☆42 · Updated last year
- Outlier Exposure with Confidence Control for Out-of-Distribution Detection ☆69 · Updated 4 years ago
- This repository provides a PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… ☆22 · Updated 4 years ago
- ☆73 · Updated 5 years ago
- ☆37 · Updated 4 years ago
- Quantitative Testing with Concept Activation Vectors in PyTorch ☆42 · Updated 6 years ago
- ☆34 · Updated 4 years ago
- ☆45 · Updated 2 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆30 · Updated 5 years ago
- ☆140 · Updated 4 years ago
- A way to achieve uniform confidence far away from the training data. ☆38 · Updated 4 years ago
- An implementation of the Residual Flow algorithm for out-of-distribution detection. ☆30 · Updated 2 years ago
- Code for "Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers" ☆27 · Updated 3 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 4 years ago