matt-seb-ho / WikiWhy
WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-and-effect relationships. It is a QA dataset containing 9,000+ "why" question-answer-rationale triplets.
☆46 · Updated 11 months ago
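As a rough illustration of the triplet structure described above, here is a minimal Python sketch; the field names, example text, and JSON Lines layout are assumptions for illustration, not the repository's actual schema.

```python
import json

# Hypothetical WikiWhy-style entry: a "why" question, its answer, and the
# rationale (explanation) connecting cause and effect. Field names and the
# example content are illustrative assumptions, not the dataset's real schema.
example = {
    "question": "Why was the coastal road closed to traffic?",
    "answer": "A storm surge washed out part of the roadbed.",
    "rationale": [
        "A storm surge can erode the ground beneath a coastal road.",
        "A road with a washed-out roadbed is unsafe to drive on.",
    ],
}

def load_triplets(path: str) -> list[dict]:
    """Load question-answer-rationale triplets from a JSON Lines file (assumed layout)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

if __name__ == "__main__":
    print(json.dumps(example, indent=2))
```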
Related projects
Alternatives and complementary repositories for WikiWhy
- The project page for "SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables" ☆19 · Updated 10 months ago
- Resources for "Retrieval Augmentation for Commonsense Reasoning: A Unified Approach" (EMNLP 2022). ☆19 · Updated last year
- [EMNLP 2022] Code for our paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation". ☆16 · Updated 2 years ago
- This code accompanies the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering". ☆18 · Updated last year
- First explanation metric (diagnostic report) for text generation evaluation. ☆60 · Updated 3 months ago
- Technical Report: "Is ChatGPT a Good NLG Evaluator? A Preliminary Study". ☆42 · Updated last year
- We construct and introduce DIALFACT, a testing benchmark dataset of crowd-annotated conversational claims, paired with pieces of evidence fr… ☆41 · Updated 2 years ago
- Resources for our ACL 2023 paper "Distilling Script Knowledge from Large Language Models for Constrained Language Planning". ☆35 · Updated last year
- TBC ☆26 · Updated 2 years ago
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023)☆13Updated last year
- Code and data for paper "Context-faithful Prompting for Large Language Models".☆39Updated last year
- Official implementation of the ACL 2023 paper: "Zero-shot Faithful Factual Error Correction"☆17Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023). ☆54 · Updated 10 months ago
- Code and data for the NeurIPS 2021 paper "A Dataset for Answering Time-Sensitive Questions". ☆62 · Updated 2 years ago
- Findings of EMNLP 2023: InfoCL: Alleviating Catastrophic Forgetting in Continual Text Classification from An Information Theoretic Perspe… ☆13 · Updated 2 months ago
- Repo for the ACL 2023 outstanding paper "Do PLMs Know and Understand Ontological Knowledge?" ☆27 · Updated last year