ThomasScialom / QuestEval
☆102 · Updated last year
Alternatives and similar repositories for QuestEval
Users interested in QuestEval are comparing it to the libraries listed below.
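QuestEval scores a generated text by asking questions about it and checking whether they can be answered from the source, so it can be used without gold references. The snippet below is a minimal reference-less scoring sketch; the `questeval.questeval_metric` module path, the `corpus_questeval` call, and the returned key names are assumed from the upstream README and may differ between releases.

```python
# Minimal reference-less QuestEval sketch (API names assumed from the upstream README).
from questeval.questeval_metric import QuestEval

# Constructor options (e.g. CPU/GPU flags) vary by release; defaults are used here.
questeval = QuestEval()

source = "After wildfires swept the region, residents were evacuated to nearby towns."
summary = "Residents were evacuated after wildfires hit the region."

# corpus_questeval takes lists of hypotheses and sources (and, optionally, gold references).
score = questeval.corpus_questeval(
    hypothesis=[summary],
    sources=[source],
)

# Key names assumed from the README: an aggregate score and per-example scores.
print(score["corpus_score"])
print(score["ex_level_scores"])
```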
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h… · ☆84 · Updated 5 years ago
- SacreROUGE is a library dedicated to the use and development of text generation evaluation metrics with an emphasis on summarization. · ☆148 · Updated 3 years ago
- An original implementation of EMNLP 2020, "AmbigQA: Answering Ambiguous Open-domain Questions" · ☆120 · Updated 3 years ago
- Codebase, data and models for the SummaC paper in TACL · ☆107 · Updated 11 months ago
- Dataset for NAACL 2021 paper: "DART: Open-Domain Structured Data Record to Text Generation" · ☆157 · Updated 3 years ago
- FRANK: Factuality Evaluation Benchmark · ☆59 · Updated 3 years ago
- Data and code for "A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization" (ACL 2020) · ☆48 · Updated 2 years ago
- This repository contains the code for "Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP". · ☆89 · Updated 4 years ago
- XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning · ☆104 · Updated 4 years ago
- The official code for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization · ☆157 · Updated 3 years ago
- Resources for the "Evaluating the Factual Consistency of Abstractive Text Summarization" paper · ☆307 · Updated 8 months ago
- Official code repository for "Exploring Neural Models for Query-Focused Summarization". · ☆51 · Updated 2 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?” · ☆85 · Updated 3 years ago
- Detect hallucinated tokens for conditional sequence generation. · ☆64 · Updated 3 years ago
- Code associated with the ACL 2021 DExperts paper · ☆118 · Updated 2 years ago
- MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance (a usage sketch follows this list) · ☆209 · Updated 2 years ago
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ · ☆101 · Updated 3 years ago
- Contrastive Fact Verification · ☆73 · Updated 3 years ago
- The data and code for EmailSum · ☆63 · Updated 4 years ago
- REALSumm: Re-evaluating Evaluation in Text Summarization · ☆73 · Updated 3 months ago
- Code and dataset for the EMNLP 2021 Findings paper "Can NLI Models Verify QA Systems’ Predictions?" · ☆25 · Updated 2 years ago
- Automatic metrics for GEM tasks · ☆67 · Updated 3 years ago
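Unlike QuestEval, several of the metrics above are reference-based. As a point of comparison, here is a hedged sketch of scoring hypotheses against references with MoverScore, following the example shown in the `moverscore_v2` module of its repository; the exact function signatures and arguments may differ between releases.

```python
# MoverScore sketch (function names assumed from the emnlp19-moverscore README).
from moverscore_v2 import get_idf_dict, word_mover_score

references = ["Residents were evacuated after wildfires hit the region."]
hypotheses = ["People left their homes because of the wildfires."]

# IDF dictionaries weight tokens; the README also permits uniform weights
# via defaultdict(lambda: 1.0) instead of get_idf_dict.
idf_dict_ref = get_idf_dict(references)
idf_dict_hyp = get_idf_dict(hypotheses)

scores = word_mover_score(
    references, hypotheses,
    idf_dict_ref, idf_dict_hyp,
    stop_words=[], n_gram=1, remove_subwords=True,
)
print(scores)  # one score per hypothesis-reference pair
```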