suinleelab / path_explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
☆187 · Updated 3 years ago
Alternatives and similar repositories for path_explain
Users interested in path_explain are comparing it to the libraries listed below.
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" (ICLR 2019) ☆128 · Updated 3 years ago
- Tools for training explainable models using attribution priors. ☆124 · Updated 4 years ago
- ☆264 · Updated 5 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- Algorithms for abstention, calibration and domain adaptation to label shift. ☆36 · Updated 4 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 3 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 · Updated 2 years ago
- Neural Additive Models (Google Research) ☆70 · Updated 3 years ago
- A Machine Learning workflow for Slurm. ☆149 · Updated 4 years ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… ☆150 · Updated 2 years ago
- Reusable BatchBALD implementation ☆78 · Updated last year
- ☆125 · Updated 4 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆131 · Updated 4 years ago
- Enabling easy statistical significance testing for deep neural networks. ☆335 · Updated 11 months ago
- Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions ☆258 · Updated last year
- Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2… ☆39 · Updated 2 years ago
- Check if you have training samples in your test set ☆64 · Updated 3 years ago
- List of relevant resources for machine learning from explanatory supervision ☆157 · Updated 4 months ago
- Weakly Supervised End-to-End Learning (NeurIPS 2021) ☆157 · Updated 2 years ago
- Repo for the Tutorials of Day1-Day3 of the Nordic Probabilistic AI School 2021 (https://probabilistic.ai/) ☆48 · Updated 3 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 9 months ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆31 · Updated 4 years ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆104 · Updated last year
- For calculating global feature importance using Shapley values. ☆270 · Updated last week
- Multislice PHATE for tensor embeddings ☆59 · Updated 4 years ago
- All about explainable AI, algorithmic fairness and more ☆108 · Updated last year
- PyTorch implementation of VAEs for heterogeneous likelihoods. ☆42 · Updated 2 years ago
- Python implementation of GLN in different frameworks ☆98 · Updated 4 years ago
- A library for uncertainty quantification based on PyTorch ☆121 · Updated 3 years ago
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) ☆231 · Updated 3 years ago