google-research-datasets / xsum_hallucination_annotations
Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (https://www.aclweb.org/anthology/2020.acl-main.173.pdf).
☆82 · Updated 4 years ago
Alternatives and similar repositories for xsum_hallucination_annotations
Users interested in xsum_hallucination_annotations are comparing it to the libraries listed below.
- Dataset for the NAACL 2021 paper "DART: Open-Domain Structured Data Record to Text Generation" ☆154 · Updated 2 years ago
- SacreROUGE is a library dedicated to the use and development of text generation evaluation metrics, with an emphasis on summarization. ☆144 · Updated 2 years ago
- Data and code for "A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization" (ACL 2020) ☆48 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning ☆103 · Updated 4 years ago
- REALSumm: Re-evaluating Evaluation in Text Summarization ☆71 · Updated 2 years ago
- FRANK: Factuality Evaluation Benchmark ☆57 · Updated 2 years ago
- This repo supports various cross-lingual transfer learning and multilingual NLP models. ☆92 · Updated last year
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆97 · Updated 2 years ago
- ☆98 · Updated last year
- ☆44 · Updated 4 years ago
- Semantic parsers based on the encoder-decoder framework ☆91 · Updated 2 years ago
- ☆71 · Updated 3 years ago
- ☆58 · Updated 3 years ago
- Heuristic Analysis for NLI Systems ☆126 · Updated 4 years ago
- Code and models for the paper "End-to-End Training of Multi-Document Reader and Retriever for Open-Domain Question Answering" (NeurIPS 20… ☆109 · Updated 3 years ago
- Code to support the paper "Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets" ☆66 · Updated 3 years ago
- An original implementation of EMNLP 2020, "AmbigQA: Answering Ambiguous Open-domain Questions" ☆119 · Updated 3 years ago
- Code and dataset for the EMNLP 2021 Findings paper "Can NLI Models Verify QA Systems' Predictions?" ☆25 · Updated last year
- Code and data to support the paper "PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them" ☆204 · Updated 3 years ago
- ☆92 · Updated 3 years ago
- ☆48 · Updated 2 years ago
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ ☆96 · Updated 2 years ago
- Code associated with the ACL 2021 DExperts paper ☆115 · Updated 2 years ago
- Code and data accompanying our ACL 2020 paper, "Unsupervised Domain Clusters in Pretrained Language Models" ☆58 · Updated 4 years ago
- ☆24 · Updated 3 years ago
- MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance ☆208 · Updated last year
- Question Answering and Generation for Summarization ☆71 · Updated 2 years ago
- Codebase, data, and models for the SummaC paper in TACL ☆97 · Updated 5 months ago
- ☆27 · Updated 2 years ago