Yale-LILY / SummEval
Resources for the "SummEval: Re-evaluating Summarization Evaluation" paper
☆373 · Updated 4 months ago
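As a minimal usage sketch, the accompanying `summ_eval` toolkit exposes each metric as a class with `evaluate_example` / `evaluate_batch` methods; the import path and method names below follow the project README but should be checked against the current release, and the ROUGE setup step is an assumption.

```python
# Sketch: scoring one system summary with the summ_eval toolkit.
# Import path and method names assumed from the project README; the ROUGE
# backend additionally expects the ROUGE_HOME environment variable to point
# at a ROUGE-1.5.5 installation.
from summ_eval.rouge_metric import RougeMetric

summary = "The cat sat on the mat ."
reference = "A cat was sitting on the mat ."

rouge = RougeMetric()
scores = rouge.evaluate_example(summary, reference)
print(scores)  # e.g. {'rouge': {'rouge_1_f_score': ..., 'rouge_2_f_score': ..., ...}}
```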
Related projects
Alternatives and complementary repositories for SummEval
- Resources for the "Evaluating the Factual Consistency of Abstractive Text Summarization" paper ☆285 · Updated last year
- MoverScore: Text Generation Evaluating with Contextualized Embeddings and Earth Mover Distance ☆199 · Updated last year
- The official code for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization ☆153 · Updated 2 years ago
- UnifiedQA: Crossing Format Boundaries With a Single QA System ☆428 · Updated 2 years ago
- Models to perform neural summarization (extractive and abstractive) using machine learning transformers and a tool to convert abstractive… ☆428 · Updated last year
- BARTScore: Evaluating Generated Text as Text Generation (a usage sketch follows this list) ☆325 · Updated 2 years ago
- DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue ☆281 · Updated last year
- Code and data to support the paper "PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them" ☆202 · Updated 3 years ago
- Search Engines with Autoregressive Language Models ☆277 · Updated last year
- Large-scale multi-document summarization dataset and code ☆276 · Updated last year
- a gaggle of deep neural architectures for text ranking and question answering, designed for Pyserini ☆340 · Updated 11 months ago
- SacreROUGE is a library dedicated to the use and development of text generation evaluation metrics with an emphasis on summarization. ☆138 · Updated 2 years ago
- Resources for the NAACL 2018 paper "A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents" ☆357 · Updated last year
- Interpretable Evaluation for AI Systems ☆361 · Updated last year
- This repository contains the code for "Generating Datasets with Pretrained Language Models".
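For the BARTScore entry above, a hedged usage sketch: the `BARTScorer` class, constructor arguments, and `score` signature are taken from the neulab/BARTScore README and may differ in newer versions; the checkpoint and device are illustrative choices.

```python
# Sketch: scoring generated summaries against their sources with BARTScore.
# API assumed from the repository README.
from bart_score import BARTScorer

scorer = BARTScorer(device="cpu", checkpoint="facebook/bart-large-cnn")

sources = ["A cat was sitting on the mat ."]
hypotheses = ["The cat sat on the mat ."]

# Scores are average log-likelihoods of the hypothesis given the source under
# BART; higher (less negative) values indicate better-supported hypotheses.
print(scorer.score(sources, hypotheses, batch_size=1))
```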