Chacha-Chen / Explanations-Human-Studies
This repository summarizes recent empirical human-subject studies that measure human understanding of machine explanations in human-AI interactions.
☆12 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for Explanations-Human-Studies
- An Empirical Study of Invariant Risk Minimization ☆28 · Updated 4 years ago
- ☆65 · Updated 3 months ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 3 years ago
- Code for "Generative causal explanations of black-box classifiers" ☆33 · Updated 3 years ago
- A benchmark for evaluating the quality of local machine learning explanations generated by any explainer, for text and image data ☆30 · Updated 3 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆52 · Updated 2 years ago
- Explaining neural decisions contrastively to alternative decisions. ☆23 · Updated 3 years ago
- Code for the paper "When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data" ☆14 · Updated 3 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆35 · Updated 2 years ago
- ☆27 · Updated 4 years ago
- This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help … ☆23 · Updated last year
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- ☆35 · Updated last year
- This repo includes our code for evaluating and improving transferability in domain generalization (NeurIPS 2021) ☆12 · Updated 2 years ago
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates" ☆36 · Updated 6 months ago
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) ☆40 · Updated 2 years ago
- Code for "Neural causal learning from unknown interventions" ☆99 · Updated 4 years ago
- Code for our ICML '19 paper: Neural Network Attributions: A Causal Perspective. ☆51 · Updated 3 years ago
- ☆30 · Updated 3 years ago
- Code for the ICLR 2022 paper "Attention-based interpretability with Concept Transformers" ☆39 · Updated last year
- Code for Quantifying Ignorance in Individual-Level Causal-Effect Estimates under Hidden Confounding ☆21 · Updated last year
- Distributional Shapley: A Distributional Framework for Data Valuation ☆30 · Updated 6 months ago
- Implementation of experiments in the paper "Learning from Rules Generalizing Labeled Exemplars", to appear in ICLR 2020 (https://openreview.net… ☆49 · Updated last year
- Implementation of Adversarial Debiasing in PyTorch to address Gender Bias ☆30 · Updated 4 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- DISSECT: Disentangled Simultaneous Explanations via Concept Traversals ☆11 · Updated 9 months ago
- Code for Environment Inference for Invariant Learning (ICML 2021 Paper) ☆49 · Updated 3 years ago
- Explaining a black-box using Deep Variational Information Bottleneck Approach ☆46 · Updated 2 years ago
- GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model’s Prediction. Thai Le, Suhang Wang, Dongwon … ☆22 · Updated 3 years ago
- ☆37 · Updated 3 years ago