velocityCavalry / CREPE
An original implementation of the paper "CREPE: Open-Domain Question Answering with False Presuppositions"
☆16 · Updated last year
Alternatives and similar repositories for CREPE
Users interested in CREPE are comparing it to the repositories listed below:
- ☆15 · Updated 4 years ago
- ☆50 · Updated 2 years ago
- FRANK: Factuality Evaluation Benchmark ☆59 · Updated 2 years ago
- ☆102 · Updated last year
- Faithfulness and factuality annotations of XSum summaries from our paper "On Faithfulness and Factuality in Abstractive Summarization" (h… ☆84 · Updated 5 years ago
- ☆59 · Updated 2 years ago
- ☆29 · Updated last year
- ☆82 · Updated 2 years ago
- This dataset contains human judgements about answer equivalence. The data is based on SQuAD (Stanford Question Answering Dataset), and co… ☆27 · Updated 3 years ago
- ☆42 · Updated 4 years ago
- ☆58 · Updated 3 years ago
- This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" (EMNLP 2023). ☆42 · Updated last year
- ☆30 · Updated 4 years ago
- Detect hallucinated tokens for conditional sequence generation. ☆64 · Updated 3 years ago
- Code repository for the ACL 2021 paper "Common Sense Beyond English: Evaluating and Improving Multilingual LMs for Commonsense Reasoning" ☆22 · Updated 4 years ago
- Dataset, metrics, and models for the TACL 2023 paper "MACSUM: Controllable Summarization with Mixed Attributes". ☆34 · Updated 2 years ago
- Code for the ACL 2020 paper "USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation" (https://arxiv.org/pdf/2005.0045…) ☆50 · Updated 3 years ago
- Code and dataset for the EMNLP 2021 Findings paper "Can NLI Models Verify QA Systems' Predictions?" ☆25 · Updated 2 years ago
- ☆71 · Updated 4 years ago
- Code associated with the ACL 2021 DExperts paper. ☆118 · Updated 2 years ago
- Data and code for "A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization" (ACL 2020). ☆48 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- Official repository for the NAACL 2021 paper "XOR QA: Cross-lingual Open-Retrieval Question Answering". ☆80 · Updated 4 years ago
- ☆47 · Updated last year
- ☆39 · Updated 2 years ago
- This repository accompanies our paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" ☆85 · Updated 3 years ago
- [EMNLP 2020] Collective HumAn OpinionS on Natural Language Inference Data ☆40 · Updated 3 years ago
- ☆101 · Updated 3 years ago
- Code for the paper "CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP" (https://arxiv.org/abs/2104.08835) ☆113 · Updated 3 years ago
- Automatic metrics for GEM tasks ☆67 · Updated 3 years ago