peterbhase / InterpretableNLP-ACL2020
Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?"
Related projects
Alternatives and complementary repositories for InterpretableNLP-ACL2020
- Implementation of experiments from the ICLR 2020 paper "Learning from Rules Generalizing Labeled Exemplars" (https://openreview.net…)
- Tool for Evaluating Adversarial Perturbations on Text
- Code for the paper "When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data"
- Code for the EMNLP 2020 paper "Information-Theoretic Probing with Minimum Description Length"
- A Diagnostic Study of Explainability Techniques for Text Classification
- Demo for the method introduced in "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs"
- Code for the EMNLP 2019 paper "Attention is not not Explanation"
- Official implementation of the paper "Learning to Scaffold: Optimizing Model Explanations for Teaching"
- IPython notebook with synthetic experiments for AFLite, based on the ICML 2020 paper "Adversarial Filters of Dataset Biases"
- OOD Generalization and Detection (ACL 2020)
- Code for the ACL 2020 paper "It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations"
- Learning the Difference that Makes a Difference with Counterfactually-Augmented Data
- Code for "Semantically Equivalent Adversarial Rules for Debugging NLP Models"
- Source code for "Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models" (ICLR 2020)
- Repository of example random control tasks for designing and interpreting neural probes
- Code for the paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?"
- Repository for the ICLR 2019 paper "Discovery of Natural Language Concepts in Individual Units of CNNs"
- Learn models that are robust to spurious correlations in the dataset
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/
- Code accompanying a paper at AISTATS 2020
- Code for SPINE: Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 20…
- Boolean question answering with multi-task learning, using large LM embeddings such as BERT and RoBERTa
- Deep Weighted Averaging Classifiers