danieldeutsch / qaeval
☆15 · Updated 4 years ago
Alternatives and similar repositories for qaeval
Users that are interested in qaeval are comparing it to the libraries listed below
- Data and code for "A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization" (ACL 2020) · ☆48 · Updated 2 years ago
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h… · ☆84 · Updated 4 years ago
- REALSumm: Re-evaluating Evaluation in Text Summarization · ☆71 · Updated 2 years ago
- FRANK: Factuality Evaluation Benchmark · ☆57 · Updated 2 years ago
- ☆99 · Updated last year
- ☆51 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi… · ☆49 · Updated 4 years ago
- Code and dataset for the EMNLP 2021 Findings paper "Can NLI Models Verify QA Systems' Predictions?" · ☆25 · Updated 2 years ago
- ☆30 · Updated 3 years ago
- ☆58 · Updated 3 years ago
- ☆50 · Updated 2 years ago
- [EMNLP 2020] Collective HumAn OpinionS on Natural Language Inference Data · ☆38 · Updated 3 years ago
- Code for ACL 21: Generating Query Focused Summaries from Query-Free Resources · ☆33 · Updated 3 years ago
- This is the official repository for NAACL 2021, "XOR QA: Cross-lingual Open-Retrieval Question Answering" · ☆80 · Updated 4 years ago
- Code for ACL 2020 paper: USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation (https://arxiv.org/pdf/2005.0045… · ☆50 · Updated 2 years ago
- ☆42 · Updated 4 years ago
- Code repo for the ACL21 paper "Common Sense Beyond English: Evaluating and Improving Multilingual LMs for Commonsense Reasoning" · ☆22 · Updated 3 years ago
- Few-shot NLP benchmark for unified, rigorous evaluation · ☆91 · Updated 3 years ago
- ☆46 · Updated 5 years ago
- ☆28 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- Source code for "Transforming Question Answering Datasets Into Natural Language Inference Datasets" · ☆62 · Updated 6 years ago
- A reference-free metric for measuring summary quality, learned from human ratings · ☆43 · Updated 2 years ago
- SacreROUGE is a library dedicated to the use and development of text generation evaluation metrics, with an emphasis on summarization · ☆144 · Updated 2 years ago
- EMNLP 2021 · CTC: A Unified Framework for Evaluating Natural Language Generation · ☆97 · Updated 2 years ago
- Code and data for "Retrieval Enhanced Model for Commonsense Generation" (ACL-IJCNLP 2021) · ☆28 · Updated 3 years ago
- Code for NAACL 2021 full paper "Efficient Attentions for Long Document Summarization" · ☆67 · Updated 4 years ago
- This repository contains the code for "How many data points is a prompt worth?" · ☆48 · Updated 4 years ago