easonnie / ChaosNLI
[EMNLP 2020] Collective HumAn OpinionS on Natural Language Inference Data
☆38 · Updated 3 years ago
Alternatives and similar repositories for ChaosNLI
Users interested in ChaosNLI are comparing it to the repositories listed below.
- ☆42 · Updated 4 years ago
- ☆24 · Updated 4 years ago
- ☆59 · Updated 2 years ago
- This is a repository with the code for the EMNLP 2020 paper "Information-Theoretic Probing with Minimum Description Length" ☆71 · Updated 11 months ago
- ☆39 · Updated 4 years ago
- ☆58 · Updated 3 years ago
- Code Repo for the ACL21 paper "Common Sense Beyond English: Evaluating and Improving Multilingual LMs for Commonsense Reasoning" ☆22 · Updated 3 years ago
- ☆15 · Updated 4 years ago
- Code for paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?" ☆22 · Updated 4 years ago
- Code and datasets for the EMNLP 2020 paper "Calibration of Pre-trained Transformers" ☆61 · Updated 2 years ago
- Code for ACL 2020 paper: USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation (https://arxiv.org/pdf/2005.0045… ☆50 · Updated 2 years ago
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi… ☆49 · Updated 4 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- Automatic metrics for GEM tasks ☆66 · Updated 2 years ago
- Debiasing Methods in Natural Language Understanding Make Bias More Accessible: Code and Data ☆14 · Updated 3 years ago
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h… ☆84 · Updated 4 years ago
- ☆49 · Updated 2 years ago
- Pytorch implementation of DiffMask ☆57 · Updated 2 years ago
- Code to support the paper "Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets" ☆66 · Updated 3 years ago
- ☆89 · Updated 3 years ago
- ☆46 · Updated 5 years ago
- ☆63 · Updated 5 years ago
- This is the official repository for NAACL 2021, "XOR QA: Cross-lingual Open-Retrieval Question Answering". ☆80 · Updated 4 years ago
- ☆22 · Updated 3 years ago
- NAACL 2022: Can Rationalization Improve Robustness? https://arxiv.org/abs/2204.11790 ☆27 · Updated 2 years ago
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ ☆97 · Updated 2 years ago
- Code and dataset for the EMNLP 2021 Findings paper "Can NLI Models Verify QA Systems’ Predictions?" ☆25 · Updated 2 years ago
- Few-shot NLP benchmark for unified, rigorous eval ☆91 · Updated 3 years ago
- This repository contains the code for "How many data points is a prompt worth?" ☆48 · Updated 4 years ago
- ☆45 · Updated last year