YilunZhou / feature-attribution-evaluation
Code repository for the AAAI 2022 paper "Do Feature Attribution Methods Correctly Attribute Features?"
☆19 · Updated 2 years ago
Alternatives and similar repositories for feature-attribution-evaluation:
Users interested in feature-attribution-evaluation are comparing it to the libraries listed below.
- Code repository for the AISTATS 2021 paper "Towards Understanding the Optimal Behaviors of Deep Active Learning Algorithms" ☆15 · Updated 3 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 3 years ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 3 years ago
- Code for Quantifying Ignorance in Individual-Level Causal-Effect Estimates under Hidden Confounding ☆21 · Updated 2 years ago
- Uncertainty in Conditional Average Treatment Effect Estimation ☆29 · Updated 3 years ago
- Explanation Optimization ☆13 · Updated 4 years ago
- Improving Transformation Invariance in Contrastive Representation Learning ☆13 · Updated 3 years ago
- NeurIPS 2022: Tree Mover’s Distance: Bridging Graph Metrics and Stability of Graph Neural Networks ☆36 · Updated last year
- Code for the ICLR 2022 paper "Attention-based interpretability with Concept Transformers" ☆40 · Updated last year
- Wrap around any model to output differentially private prediction sets with finite sample validity on any dataset. ☆17 · Updated 10 months ago
- Self-Explaining Neural Networks ☆39 · Updated 4 years ago
- ☆17 · Updated 6 years ago
- Random feature latent variable models in Python ☆22 · Updated last year
- Companion code for the paper "Learnable Uncertainty under Laplace Approximations" (UAI 2021). ☆19 · Updated 3 years ago
- A benchmark for evaluating the quality of local machine learning explanations generated by any explainer for text and image data ☆30 · Updated 3 years ago
- Parameter-Space Saliency Maps for Explainability ☆23 · Updated last year
- General purpose library for BNNs, and implementation of OC-BNNs in our 2020 NeurIPS paper. ☆38 · Updated 2 years ago
- B-LRP is the repository for the paper "How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks" ☆18 · Updated 2 years ago
- Code for experiments to learn uncertainty ☆30 · Updated last year
- [NeurIPS 2020] Coresets for Robust Training of Neural Networks against Noisy Labels ☆32 · Updated 3 years ago
- Python package for evaluating model calibration in classification ☆19 · Updated 5 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- PyTorch implementation of VAEs for heterogeneous likelihoods. ☆42 · Updated 2 years ago
- An Empirical Study of Invariant Risk Minimization ☆27 · Updated 4 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆57 · Updated 3 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆35 · Updated 2 years ago
- Model-agnostic posthoc calibration without distributional assumptions ☆42 · Updated last year
- Supercharging Imbalanced Data Learning With Causal Representation Transfer ☆12 · Updated 3 years ago
- Codebase for "Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions", ICML 2020. ☆8 · Updated 4 years ago
- Label shift experiments ☆15 · Updated 4 years ago