zhiCHEN96 / ConceptWhitening
☆119 · Updated 2 years ago
Alternatives and similar repositories for ConceptWhitening:
Users interested in ConceptWhitening are comparing it to the repositories listed below.
- A basic implementation of Layer-wise Relevance Propagation (LRP) in PyTorch. ☆86 · Updated 2 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆158 · Updated last year
- NeurIPS 2021 | Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information ☆32 · Updated 3 years ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP). ☆132 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration (a minimal binning sketch follows this list). ☆144 · Updated 2 years ago
- PyTorch library for model calibration metrics and visualizations as well as recalibration methods. In progress! ☆69 · Updated 2 weeks ago
- Papers and code on Explainable AI, especially for image classification. ☆200 · Updated 2 years ago
- Information Bottlenecks for Attribution ☆77 · Updated 2 years ago
- PyTorch implementation of various neural network interpretability methods ☆113 · Updated 2 years ago
- Basic LRP implementation in PyTorch ☆167 · Updated 5 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 3 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP. ☆208 · Updated 5 months ago
- Figures & code from the paper "Shortcut Learning in Deep Neural Networks" (Nature Machine Intelligence 2020) ☆95 · Updated 2 years ago
- Calibration of Convolutional Neural Networks ☆160 · Updated last year
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta… ☆352 · Updated 2 years ago
- Code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty ☆131 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆122 · Updated 7 months ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆32 · Updated 9 months ago
- Code to accompany the paper 'Improving model calibration with accuracy versus uncertainty optimization'. ☆54 · Updated 2 years ago
- Dataset and code for the CLEVR-XAI dataset. ☆31 · Updated last year
- Code for "Uncertainty Estimation Using a Single Deep Deterministic Neural Network" ☆271 · Updated 2 years ago
- Implementation of the Barlow Twins paper ☆99 · Updated 2 years ago
- ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR 2021 ☆93 · Updated 2 years ago
- Combating hidden stratification with GEORGE ☆62 · Updated 3 years ago
- GAN-based method to create counterfactual explanations for chest X-rays ☆23 · Updated 2 years ago
- ☆108 · Updated 2 years ago
- ☆65 · Updated 5 years ago
- A toolkit for efficient computation of saliency maps for explainable AI attribution. This tool was developed at Lawrence Livermore Nationa… ☆44 · Updated 4 years ago
- ☆110 · Updated 2 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆52 · Updated 2 years ago
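Several of the repositories above revolve around confidence calibration (focal-loss calibration, reliability diagrams, recalibration toolkits, accuracy-versus-uncertainty optimization). As a point of reference, here is a minimal, self-contained sketch of the confidence binning behind reliability diagrams and expected calibration error (ECE). It is illustrative only and is not taken from any repository listed here; the name `compute_ece` and the 15-bin default are assumptions.

```python
# Minimal sketch of reliability-diagram binning and expected calibration error (ECE).
# Not the implementation of any repository above; names and defaults are illustrative.
import torch

def compute_ece(logits: torch.Tensor, labels: torch.Tensor, n_bins: int = 15) -> float:
    """Expected calibration error from top-1 confidences and correctness."""
    probs = torch.softmax(logits, dim=1)
    confidences, predictions = probs.max(dim=1)
    accuracies = predictions.eq(labels).float()

    bin_edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = torch.zeros(1)
    for lower, upper in zip(bin_edges[:-1], bin_edges[1:]):
        # Samples whose top-1 confidence falls into this bin.
        in_bin = (confidences > lower) & (confidences <= upper)
        prop_in_bin = in_bin.float().mean()
        if prop_in_bin.item() > 0:
            acc_in_bin = accuracies[in_bin].mean()    # bar height in a reliability diagram
            conf_in_bin = confidences[in_bin].mean()  # bin's average confidence
            ece += (conf_in_bin - acc_in_bin).abs() * prop_in_bin
    return ece.item()

# Example: ECE of random logits against random labels.
logits = torch.randn(1000, 10)
labels = torch.randint(0, 10, (1000,))
print(f"ECE: {compute_ece(logits, labels):.4f}")
```

In a reliability diagram, each bin's average accuracy is plotted against its average confidence; ECE is the confidence-weighted gap between the two.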