nesl / Explainability-Study
How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods
☆23 · Updated 4 years ago
Alternatives and similar repositories for Explainability-Study:
Users interested in Explainability-Study are comparing it to the libraries listed below.
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 2 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 3 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- Code for "Interpretable Image Recognition with Hierarchical Prototypes" ☆18 · Updated 5 years ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 3 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 4 years ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space ☆40 · Updated 4 years ago
- ☆44 · Updated 2 years ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 3 years ago
- A simple algorithm to identify and correct for label shift. ☆21 · Updated 7 years ago
- ☆18 · Updated 3 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆52 · Updated 2 years ago
- PyTorch implementation for "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 2 years ago
- An Empirical Study of Invariant Risk Minimization ☆27 · Updated 4 years ago
- Code for the conference paper "GLOD: Gaussian Likelihood OOD Detector" ☆16 · Updated 2 years ago
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) ☆13 · Updated 2 years ago
- Learning perturbation sets for robust machine learning ☆64 · Updated 3 years ago
- (ICML 2021) Mandoline: Model Evaluation under Distribution Shift ☆31 · Updated 3 years ago
- Supplementary code for Editable Neural Networks, an ICLR 2020 submission ☆45 · Updated 5 years ago
- Official PyTorch implementation for our ICCV 2019 paper "Fooling Network Interpretation in Image Classification" ☆24 · Updated 5 years ago
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆35 · Updated 9 months ago
- LISA for ICML 2022 ☆47 · Updated last year
- Source code of "Hold me tight! Influence of discriminative features on deep network boundaries" ☆22 · Updated 3 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆30 · Updated 5 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated 9 months ago
- Simple data balancing baselines for worst-group-accuracy benchmarks ☆41 · Updated last year
- This repository provides the code for replicating the experiments in the paper "Building One-Shot Semi-supervised (BOSS) Learning up to F… ☆36 · Updated 4 years ago
- Implementation of the paper "Identifying Mislabeled Data using the Area Under the Margin Ranking": https://arxiv.org/pdf/2001.10528v2.pdf ☆21 · Updated 5 years ago
- Learning Robust Global Representations by Penalizing Local Predictive Power (NeurIPS 2019) ☆18 · Updated 2 years ago