bigscience-workshop / evaluation
Code and Data for Evaluation WG
☆41 · Updated 2 years ago
Alternatives and similar repositories for evaluation, as compared by users interested in this repository:
- ☆75 · Updated 3 years ago
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h… ☆81 · Updated 4 years ago
- A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations ☆55 · Updated 2 years ago
- Few-shot NLP benchmark for unified, rigorous eval ☆91 · Updated 2 years ago
- The official repository for the NAACL 2021 paper "XOR QA: Cross-lingual Open-Retrieval Question Answering" ☆79 · Updated 3 years ago
- Codebase, data, and models for the Keep it Simple paper at ACL 2021 ☆38 · Updated last year
- ☆22 · Updated 3 years ago
- Statistics on multilingual datasets ☆17 · Updated 2 years ago
- Implementation of MARGE (Pre-training via Paraphrasing) in PyTorch ☆75 · Updated 4 years ago
- EMNLP 2021 tutorial: Multi-Domain Multilingual Question Answering ☆38 · Updated 3 years ago
- Code supporting the paper "Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets" ☆66 · Updated 3 years ago
- Research code for the paper "How Good is Your Tokenizer? On the Monolingual Performance of Multilingual Language Models" ☆26 · Updated 3 years ago
- Hyperparameter search for AllenNLP ☆137 · Updated 3 weeks ago
- Code for the EMNLP 2020 paper "Information-Theoretic Probing with Minimum Description Length" ☆69 · Updated 7 months ago
- Automatic metrics for GEM tasks ☆65 · Updated 2 years ago
- ☆46 · Updated 5 years ago
- QED: A Framework and Dataset for Explanations in Question Answering ☆116 · Updated 3 years ago
- A BART version of an open-domain QA model in a closed-book setup ☆119 · Updated 4 years ago
- Codebase for probing and visualizing multilingual models ☆47 · Updated 4 years ago
- Code and data for the EMNLP 2020 paper "MOCHA: A Dataset for Training and Evaluating Reading Comprehension Metrics" ☆16 · Updated 2 years ago
- A benchmark for understanding and evaluating rationales: http://www.eraserbenchmark.com/ ☆96 · Updated 2 years ago
- An original implementation of the EMNLP 2020 paper "AmbigQA: Answering Ambiguous Open-domain Questions" ☆118 · Updated 2 years ago
- XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning ☆101 · Updated 4 years ago
- Evaluating Machines by their Real-World Language Use ☆33 · Updated last year
- ☆20 · Updated 2 years ago
- EMNLP 2021 paper "CTC: A Unified Framework for Evaluating Natural Language Generation" ☆96 · Updated 2 years ago
- SacreROUGE, a library dedicated to the use and development of text generation evaluation metrics, with an emphasis on summarization ☆142 · Updated 2 years ago
- ☆68 · Updated 3 years ago
- REALSumm: Re-evaluating Evaluation in Text Summarization ☆71 · Updated 2 years ago
- ☆29 · Updated last year