Chacha-Chen / Explanations-Human-Studies
This repository provides a summary of recent empirical (human-subject) studies that measure human understanding with machine explanations in human-AI interactions.
☆13 · Updated 6 months ago
Alternatives and similar repositories for Explanations-Human-Studies:
Users interested in Explanations-Human-Studies are comparing it to the repositories listed below.
- ☆27 · Updated 4 years ago
- Code for paper "Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals" ☆17 · Updated 2 years ago
- ☆44 · Updated 2 years ago
- An Empirical Study of Invariant Risk Minimization ☆27 · Updated 4 years ago
- Code for "Generative causal explanations of black-box classifiers" ☆33 · Updated 4 years ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 3 years ago
- ☆17 · Updated 2 years ago
- Explaining neural decisions contrastively to alternative decisions. ☆23 · Updated 3 years ago
- Code to study the generalisability of benchmark models on non-stationary EHRs. ☆14 · Updated 5 years ago
- This repo includes our code for evaluating and improving transferability in domain generalization (NeurIPS 2021) ☆12 · Updated 2 years ago
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization ☆29 · Updated 2 years ago
- ☆11 · Updated last year
- Code for "Rissanen Data Analysis: Examining Dataset Characteristics via Description Length" by Ethan Perez, Douwe Kiela, and Kyungyhun Ch…☆35Updated 3 years ago
- ☆44 · Updated 4 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- Fine-grained ImageNet annotations ☆29 · Updated 4 years ago
- ☆86 · Updated last year
- Fast Axiomatic Attribution for Neural Networks (NeurIPS*2021) ☆15 · Updated last year
- ☆20 · Updated 4 months ago
- Code for paper "When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data" ☆14 · Updated 3 years ago
- ☆19 · Updated 4 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆52 · Updated 2 years ago
- ☆65 · Updated 6 months ago
- Code for the ICLR 2020 paper "A Theory of Usable Information under Computational Constraints" ☆24 · Updated 4 years ago
- Code for the paper "Studying Large Language Model Behaviors Under Context-Memory Conflicts With Real Documentss"☆12Updated 3 months ago
- This repository contains some of the code used in the paper "Training Language Models with Langauge Feedback at Scale"☆27Updated last year
- Code for the ICLR 2022 paper "Attention-based interpretability with Concept Transformers" ☆40 · Updated last year
- Code for the NeurIPS 2018 paper "On Controllable Sparse Alternatives to Softmax" ☆22 · Updated 5 years ago
- LISA for ICML 2022 ☆47 · Updated last year
- Code for paper "Can contrastive learning avoid shortcut solutions?" NeurIPS 2021. ☆47 · Updated 2 years ago