google-research-datasets / xsum_hallucination_annotations
Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (https://www.aclweb.org/anthology/2020.acl-main.173.pdf).
☆84 · Updated 5 years ago
Alternatives and similar repositories for xsum_hallucination_annotations
Users interested in xsum_hallucination_annotations are comparing it to the repositories listed below.
- SacreROUGE is a library dedicated to the use and development of text generation evaluation metrics, with an emphasis on summarization. ☆148 · Updated 3 years ago
- REALSumm: Re-evaluating Evaluation in Text Summarization ☆73 · Updated 2 months ago
- Dataset for the NAACL 2021 paper "DART: Open-Domain Structured Data Record to Text Generation" ☆155 · Updated 3 years ago
- ☆59 · Updated 2 years ago
- FRANK: Factuality Evaluation Benchmark ☆59 · Updated 2 years ago
- XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning ☆104 · Updated 4 years ago
- ☆44 · Updated 4 years ago
- ☆102 · Updated last year
- Data and code for "A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization" (ACL 2020) ☆48 · Updated 2 years ago
- This repo supports various cross-lingual transfer learning and multilingual NLP models. ☆92 · Updated 2 years ago
- Code and dataset for the EMNLP 2021 Findings paper "Can NLI Models Verify QA Systems' Predictions?" ☆25 · Updated 2 years ago
- Code for the ACL 2020 paper "USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation" (https://arxiv.org/pdf/2005.0045…) ☆50 · Updated 2 years ago
- An original implementation of EMNLP 2020, "AmbigQA: Answering Ambiguous Open-domain Questions" ☆120 · Updated 3 years ago
- Question Answering and Generation for Summarization ☆71 · Updated 3 years ago
- ☆58 · Updated 3 years ago
- ☆28 · Updated 3 years ago
- ☆15 · Updated 4 years ago
- Codebase, data, and models for the SummaC paper in TACL ☆105 · Updated 10 months ago
- ☆46 · Updated 2 years ago
- Code and data accompanying the ACL 2020 paper "Unsupervised Domain Clusters in Pretrained Language Models" ☆58 · Updated 5 years ago
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ ☆99 · Updated 3 years ago
- ☆39 · Updated 4 years ago