peterbhase / InterpretableNLP-ACL2020
Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?"
☆45 · Updated last year
Alternatives and similar repositories for InterpretableNLP-ACL2020
Users interested in InterpretableNLP-ACL2020 are comparing it to the libraries listed below
- Learning the Difference that Makes a Difference with Counterfactually-Augmented Data ☆170 · Updated 4 years ago
- Tool for Evaluating Adversarial Perturbations on Text ☆61 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- Code for "Semantically Equivalent Adversarial Rules for Debugging NLP Models" ☆87 · Updated 6 years ago
- Code for EMNLP 2019 paper "Attention is not not Explanation" ☆58 · Updated 4 years ago
- Implementation of experiments in the paper "Learning from Rules Generalizing Labeled Exemplars", ICLR 2020 (https://openreview.net… ☆50 · Updated 2 years ago
- Code for the EMNLP 2020 paper "Information-Theoretic Probing with Minimum Description Length" ☆71 · Updated 11 months ago
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ ☆97 · Updated 2 years ago
- A Diagnostic Study of Explainability Techniques for Text Classification ☆68 · Updated 4 years ago
- Text classification models. Used as a submodule for other projects. ☆68 · Updated 6 years ago
- Interpretable Neural Predictions with Differentiable Binary Variables ☆84 · Updated 4 years ago
- OOD Generalization and Detection (ACL 2020) ☆60 · Updated 5 years ago
- Materials for the EMNLP 2020 Tutorial on "Interpreting Predictions of NLP Models" ☆199 · Updated 4 years ago
- Source code for "Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models", ICLR 2020 ☆30 · Updated 5 years ago
- Code for the ACL 2018 paper "Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context" ☆54 · Updated 7 years ago
- ☆89 · Updated 3 months ago
- ☆64 · Updated 3 years ago
- ☆66 · Updated 2 years ago
- The accompanying code for "Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understandin… ☆21 · Updated 5 years ago
- Checking the interpretability of attention on text classification models ☆49 · Updated 6 years ago
- ☆63 · Updated 5 years ago
- Code for paper: Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data ☆35 · Updated 4 years ago
- diagNNose is a Python library that provides a broad set of tools for analysing hidden activations of neural models ☆82 · Updated last year
- Code for paper "When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data" ☆14 · Updated 4 years ago
- Companion site for "Analysis Methods in Neural Language Processing: A Survey" ☆66 · Updated 5 years ago
- Repository for our ICLR 2019 paper: Discovery of Natural Language Concepts in Individual Units of CNNs ☆26 · Updated 6 years ago
- Code for ACL'20 paper "It's Morphin' Time! Combating Linguistic Discrimination with Inflectional Perturbations" ☆19 · Updated 3 months ago
- This is the official implementation for the paper "Learning to Scaffold: Optimizing Model Explanations for Teaching" ☆19 · Updated 3 years ago
- NLI test set with lexical inferences ☆49 · Updated 6 years ago
- Demo for method introduced in "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs" ☆56 · Updated 5 years ago