Official GitHub repository for the paper "Evaluating the Evaluation of Diversity in Natural Language Generation"
☆20 · Updated Feb 23, 2021
Alternatives and similar repositories for diversity-eval
Users interested in diversity-eval are comparing it to the repositories listed below.
- Code for "Simulated Multiple Reference Training Improves Low-Resource Machine Translation" — ☆15 · Updated Dec 1, 2020
- ☆15 · Updated Dec 12, 2024
- Lightweight PDF Q&A tool powered by RAG (Retrieval-Augmented Generation) with MCP (Model Context Protocol) support — ☆22 · Updated Oct 27, 2025
- Analyze argumentation and rhetorical aspects in scientific writing — ☆19 · Updated Nov 21, 2022
- Benchmark for evaluating open-ended generation — ☆51 · Updated Nov 6, 2024
- Wikipedia-based dataset for training relationship classifiers and fact extraction models — ☆26 · Updated May 25, 2021
- Dataset and statistical analysis code released with the EMNLP 2017 paper "Why We Need New Evaluation Metrics for NLG" — ☆19 · Updated Nov 16, 2021
- FusedChat: a dialogue dataset containing sessions that fuse task-oriented and open-domain dialogues — ☆29 · Updated Jul 20, 2022
- Code for GenAug: Data Augmentation for Finetuning Text Generators — ☆27 · Updated Oct 8, 2021
- Repository collecting resources and best practices to improve experimental rigour in deep learning research — ☆27 · Updated Mar 30, 2023
- Codebase for the ACL 2023 paper "Did You Read the Instructions? Rethinking the Effectiveness of Task Definitions in Instruction Learni…" — ☆30 · Updated Jul 16, 2023
- Code for ACL 2021 paper "Unsupervised Out-of-Domain Detection via Pre-trained Transformers"