danieldeutsch / qaeval
☆15 · Updated 4 years ago
Alternatives and similar repositories for qaeval
Users interested in qaeval are comparing it to the libraries listed below.
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h… ☆84 · Updated 5 years ago
- Code and dataset for the EMNLP 2021 Findings paper "Can NLI Models Verify QA Systems’ Predictions?" ☆25 · Updated 2 years ago
- REALSumm: Re-evaluating Evaluation in Text Summarization ☆73 · Updated 3 months ago
- ☆50 · Updated 2 years ago
- Data and code for "A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization" (ACL 2020) ☆48 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- Code repo for the ACL 2021 paper "Common Sense Beyond English: Evaluating and Improving Multilingual LMs for Commonsense Reasoning" ☆23 · Updated 4 years ago
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi… ☆49 · Updated 4 years ago
- ☆30 · Updated 4 years ago
- [EMNLP 2020] Collective HumAn OpinionS on Natural Language Inference Data ☆40 · Updated 3 years ago
- ☆46 · Updated 2 years ago
- Code for ACL 2021: Generating Query Focused Summaries from Query-Free Resources ☆33 · Updated 3 years ago
- Supporting code for the EMNLP 2019 paper "Answers Unite! Unsupervised Metrics for Reinforced Summarization Models" ☆14 · Updated 2 years ago
- ☆29 · Updated last year
- ☆55 · Updated 2 years ago
- ☆58 · Updated 3 years ago
- FRANK: Factuality Evaluation Benchmark ☆59 · Updated 3 years ago
- ☆102 · Updated last year
- ☆42 · Updated 4 years ago
- An original implementation of the paper "CREPE: Open-Domain Question Answering with False Presuppositions" ☆16 · Updated last year
- ☆28 · Updated 3 years ago
- Semantic parsers based on the encoder-decoder framework ☆91 · Updated 2 years ago
- Code to support the paper "Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets" ☆65 · Updated 4 years ago
- Few-shot NLP benchmark for unified, rigorous evaluation ☆93 · Updated 3 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” ☆85 · Updated 3 years ago
- ☆35 · Updated 4 years ago
- Official repository for our EACL 2023 paper "LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization" (https… ☆44 · Updated last year
- ☆39 · Updated 4 years ago
- XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning ☆104 · Updated 4 years ago
- SacreROUGE is a library dedicated to the use and development of text generation evaluation metrics, with an emphasis on summarization. ☆148 · Updated 3 years ago