fredhohman / summit-notebooks
Notebooks for Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
⭐15 · Updated 5 years ago
Alternatives and similar repositories for summit-notebooks:
Users interested in summit-notebooks are comparing it to the libraries listed below.
- 🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations ⭐114 · Updated 5 years ago
- [ICLR 2023 spotlight] MEDFAIR: Benchmarking Fairness for Medical Imaging ⭐63 · Updated last year
- Explanation by Progressive Exaggeration ⭐20 · Updated 2 years ago
- Python implementation of activation maximization with PyTorch. ⭐27 · Updated 4 years ago
- This repository contains the implementation of Concept Activation Regions, a new framework to explain deep neural networks with human con… ⭐11 · Updated 2 years ago
- Towards Efficient Shapley Value Estimation via Cross-contribution Maximization ⭐14 · Updated 2 years ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ⭐61 · Updated 3 weeks ago
- This repository provides details of the experimental code in the paper: Instance-based Counterfactual Explanations for Time Series Classi… ⭐18 · Updated 3 years ago
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations is a ServiceNow Research project that was started at Elemen… ⭐13 · Updated last year
- Towards Robust Interpretability with Self-Explaining Neural Networks, Alvarez-Melis et al. 2018 ⭐15 · Updated 5 years ago
- An Empirical Framework for Domain Generalization In Clinical Settings ⭐30 · Updated 3 years ago
- A Data-Centric library providing a unified interface for state-of-the-art methods for hardness characterisation of data points. ⭐24 · Updated last month
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ⭐35 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ⭐74 · Updated 3 years ago
- Official repository of ICML 2023 paper: Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat ⭐23 · Updated last year
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ⭐53 · Updated 3 years ago
- Code for the Diff-SCM paper ⭐96 · Updated last year
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ⭐130 · Updated 4 years ago
- [ICML 2023] Change is Hard: A Closer Look at Subpopulation Shift ⭐108 · Updated last year
- Implementation of Concept-level Debugging of Part-Prototype Networks ⭐12 · Updated last year
- A benchmark for distribution shift in tabular data ⭐52 · Updated 10 months ago
- OpenDataVal: a Unified Benchmark for Data Valuation in Python (NeurIPS 2023) ⭐96 · Updated 2 months ago
- A PyTorch implementation of the Explainable AI work "Contrastive layerwise relevance propagation (CLRP)" ⭐17 · Updated 2 years ago
- This is the implementation for the NeurIPS 2022 paper: ZIN: When and How to Learn Invariance Without Environment Partition? ⭐22 · Updated 2 years ago
- (no description) ⭐43 · Updated 2 weeks ago
- An amortized approach for calculating local Shapley value explanations ⭐97 · Updated last year
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Paper presented at the MICCAI 2023 conference. ⭐19 · Updated last year
- This is the code for the CVPR 2022 paper Bayesian Invariant Risk Minimization. ⭐44 · Updated last year
- NeurIPS 2021 | Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information ⭐33 · Updated 3 years ago
- For calculating Shapley values via linear regression. ⭐67 · Updated 3 years ago