fredhohman/summit
🏔️ Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
☆109 · Updated 4 years ago
Related projects:
- Towards Automatic Concept-based Explanations ☆154 · Updated 4 months ago
- ☆48 · Updated 4 years ago
- ☆107 · Updated last year
- Code for the Proceedings of the National Academy of Sciences 2020 article, "Understanding the Role of Individual Units in a Deep Neural N… ☆301 · Updated 3 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆125 · Updated 3 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆124 · Updated 3 years ago
- ☆130 · Updated 5 years ago
- Python package for creating rule-based explanations for classifiers. ☆59 · Updated 4 years ago
- Code for the TCAV ML interpretability project ☆628 · Updated last month
- PyTorch implementation of Interpretable Explanations of Black Boxes by Meaningful Perturbation ☆334 · Updated 2 years ago
- Detect model's attention ☆150 · Updated 4 years ago
- Tools for training explainable models using attribution priors. ☆121 · Updated 3 years ago
- Figures & code from the paper "Shortcut Learning in Deep Neural Networks" (Nature Machine Intelligence 2020) ☆94 · Updated 2 years ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆60 · Updated 5 years ago
- Release of CIFAR-10.1, a new test set for CIFAR-10. ☆218 · Updated 4 years ago
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… ☆173 · Updated last year
- Code/figures in Right for the Right Reasons ☆54 · Updated 3 years ago
- FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning ☆35 · Updated 4 months ago
- Light version of Network Dissection for Quantifying Interpretability of Networks ☆215 · Updated 5 years ago
- ☆118 · Updated 2 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆51 · Updated 2 years ago
- Papers on interpretable deep learning, for review ☆27 · Updated 6 years ago
- Code for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 6 years ago
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) ☆214 · Updated 2 years ago
- A benchmark of data-centric tasks from across the machine learning lifecycle. ☆72 · Updated 2 years ago
- A visual analytic system for fair data-driven decision making ☆25 · Updated last year
- Original dataset release for CIFAR-10H ☆79 · Updated 3 years ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆101 · Updated 5 months ago
- PyTorch implementation of parity loss as constraints function to realize the fairness of machine learning. ☆71 · Updated last year
- A drop-in replacement for CIFAR-10. ☆234 · Updated 3 years ago