suinleelab / path_explain
A repository for explaining feature attributions and feature interactions in deep neural networks.
⭐ 185 · Updated 2 years ago
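path_explain centers on path-attribution methods such as integrated and expected gradients, which average a model's gradients along a path from a reference baseline to the input. The sketch below illustrates the expected-gradients idea in plain TensorFlow; it assumes a single-output Keras model on 2-D tabular inputs and uses a hypothetical function name, so it is an illustration of the technique rather than the library's own API.

```python
import numpy as np
import tensorflow as tf

def expected_gradients(model, inputs, baselines, num_samples=200):
    """Illustrative expected-gradients attributions (assumptions: single-output
    model, [batch, features] inputs). Averages the model's gradient at points
    interpolated between random baselines and each input, scaled by
    (input - baseline). Not path_explain's own implementation."""
    inputs = tf.convert_to_tensor(inputs, dtype=tf.float32)
    baselines = tf.convert_to_tensor(baselines, dtype=tf.float32)
    batch = inputs.shape[0]
    total = tf.zeros_like(inputs)
    for _ in range(num_samples):
        # Sample a random baseline row and a random position alpha on the path.
        idx = np.random.randint(0, baselines.shape[0], size=batch)
        baseline = tf.gather(baselines, idx)
        alpha = tf.random.uniform([batch, 1])
        point = baseline + alpha * (inputs - baseline)
        with tf.GradientTape() as tape:
            tape.watch(point)
            # Summing over the batch keeps each example's gradient independent.
            output = tf.reduce_sum(model(point))
        grads = tape.gradient(output, point)
        total += (inputs - baseline) * grads
    return total / num_samples
```

Sampling both the baseline and the interpolation point is what distinguishes expected gradients from plain integrated gradients, which integrates over a fixed grid of steps from a single baseline.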
Related projects
Alternatives and complementary repositories for path_explain
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) · ⭐ 125 · Updated 3 years ago
- Tools for training explainable models using attribution priors. · ⭐ 121 · Updated 3 years ago
- A lightweight implementation of removal-based explanations for ML models. · ⭐ 57 · Updated 3 years ago
- ⭐ 264 · Updated 4 years ago
- Algorithms for abstention, calibration and domain adaptation to label shift. · ⭐ 36 · Updated 4 years ago
- Neural Additive Models (Google Research) · ⭐ 67 · Updated 3 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… · ⭐ 127 · Updated 3 years ago
- Enabling easy statistical significance testing for deep neural networks. · ⭐ 330 · Updated 4 months ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlight) · ⭐ 143 · Updated 2 years ago
- Weakly Supervised End-to-End Learning (NeurIPS 2021) · ⭐ 153 · Updated last year
- All about explainable AI, algorithmic fairness and more · ⭐ 107 · Updated last year
- Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2… · ⭐ 37 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques · ⭐ 57 · Updated last year
- Repo for the Tutorials of Day 1-Day 3 of the Nordic Probabilistic AI School 2021 (https://probabilistic.ai/) · ⭐ 47 · Updated 3 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. · ⭐ 76 · Updated last year
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) · ⭐ 219 · Updated 2 years ago
- For calculating global feature importance using Shapley values. · ⭐ 253 · Updated this week
- A Machine Learning workflow for Slurm. · ⭐ 146 · Updated 3 years ago
- ⭐ 117 · Updated 2 years ago
- ⭐ 124 · Updated 3 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. · ⭐ 129 · Updated 4 years ago
- Reusable BatchBALD implementation · ⭐ 74 · Updated 8 months ago
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… · ⭐ 173 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations · ⭐ 233 · Updated 3 months ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI · ⭐ 52 · Updated 2 years ago
- ⭐ 131 · Updated 5 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems · ⭐ 73 · Updated 2 years ago
- Combating hidden stratification with GEORGE · ⭐ 62 · Updated 3 years ago
- Measure and visualize machine learning model performance without the usual boilerplate. · ⭐ 96 · Updated 2 months ago
- This repository contains implementations of algorithms proposed in recent papers from top machine learning conferences on Fairness, Accou… · ⭐ 33 · Updated 2 years ago