zhiCHEN96 / ConceptWhitening
⭐120 · Updated 3 years ago
Alternatives and similar repositories for ConceptWhitening:
Users interested in ConceptWhitening are comparing it to the libraries listed below.
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ⭐160 · Updated last year
- Combating hidden stratification with GEORGE ⭐63 · Updated 3 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ⭐128 · Updated 3 years ago
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ⭐89 · Updated 2 years ago
- NumPy library for calibration metrics ⭐69 · Updated last month
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ⭐127 · Updated 4 years ago
- Figures & code from the paper "Shortcut Learning in Deep Neural Networks" (Nature Machine Intelligence 2020) ⭐96 · Updated 2 years ago
- PyTorch implementation of various neural network interpretability methods ⭐116 · Updated 3 years ago
- Tools for training explainable models using attribution priors. ⭐123 · Updated 4 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ⭐34 · Updated 11 months ago
- Detect model's attention ⭐165 · Updated 4 years ago
- ⭐109 · Updated 2 years ago
- ⭐51 · Updated 4 years ago
- Implements some LRP rules to get explanations for Resnets and Densenet-121, including batchnorm-Conv canonization and tensorbiased layers… ⭐25 · Updated last year
- Implementation of Barlow Twins paper ⭐100 · Updated 2 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ⭐135 · Updated 4 years ago
- NeurIPS 2021 | Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information ⭐32 · Updated 3 years ago
- Towards Automatic Concept-based Explanations ⭐159 · Updated 10 months ago
- Optimal Transport Dataset Distance ⭐162 · Updated 2 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ⭐77 · Updated 2 years ago
- Calibration of Convolutional Neural Networks ⭐160 · Updated last year
- Papers and code of Explainable AI, esp. w.r.t. image classification ⭐204 · Updated 2 years ago
- Code for "Uncertainty Estimation Using a Single Deep Deterministic Neural Network"β272Updated 3 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximizationβ124Updated 9 months ago
- TensorFlow 2 implementation of the paper Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution β¦β45Updated 3 years ago
- Estimating Example Difficulty using Variance of Gradientsβ63Updated 2 years ago
- This repository contains the code of the distribution shift framework presented in A Fine-Grained Analysis on Distribution Shift (Wiles eβ¦β81Updated 2 weeks ago
- Reliability diagrams visualize whether a classifier model needs calibrationβ146Updated 3 years ago
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpretaβ¦β358Updated 2 years ago
- Code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertaintyβ136Updated last year