matt-seb-ho / WikiWhy
WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-and-effect relationships. It is a QA dataset containing more than 9,000 "why" question-answer-rationale triplets.
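To make the triplet structure concrete, here is a minimal sketch of what one WikiWhy-style entry and a simple evaluation prompt might look like. The field names and the example content are illustrative assumptions, not the dataset's actual schema.

```python
# A hypothetical WikiWhy-style record: a "why" question, its answer,
# and a step-by-step rationale explaining the cause-effect link.
# (Field names are assumed for illustration, not taken from the dataset.)
example = {
    "question": "Why do deciduous trees shed their leaves in autumn?",
    "answer": "Shedding leaves reduces water loss during winter.",
    "rationale": [
        "Leaves lose water through transpiration.",
        "In winter, ground water can freeze and become unavailable.",
        "Dropping leaves lets the tree conserve water until spring.",
    ],
}

def format_prompt(entry):
    """Build a simple 'explain why' prompt from a question-answer pair,
    leaving the rationale to be generated and then compared against the
    reference rationale."""
    return f"Question: {entry['question']}\nAnswer: {entry['answer']}\nExplain why:"

print(format_prompt(example))
```

A generated explanation would then be scored against the reference `rationale`, which is what distinguishes this setup from plain answer-only QA.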
Related projects
Alternatives and complementary repositories for WikiWhy
- Code and data for the NeurIPS 2021 paper "A Dataset for Answering Time-Sensitive Questions"
- Project page for "SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables"
- Official implementation of the ACL 2023 paper "Zero-shot Faithful Factual Error Correction"
- Code for the ACL 2023 paper "Fact-Checking Complex Claims with Program-Guided Reasoning"
- The Shifted and The Overlooked: A Task-oriented Investigation of User-GPT Interactions (EMNLP 2023)
- Code and data for the paper "Context-faithful Prompting for Large Language Models"
- [EMNLP 2022] Code for the paper "ZeroGen: Efficient Zero-shot Learning via Dataset Generation"
- Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning benchmark built upon GSM8K by adding irrelevant se…
- Resources for "Retrieval Augmentation for Commonsense Reasoning: A Unified Approach" (EMNLP 2022)
- First explanation metric (diagnostic report) for text generation evaluation
- Resources for the ACL 2023 paper "Distilling Script Knowledge from Large Language Models for Constrained Language Planning"
- Code accompanying the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering"
- Official code repository for the ACL 2023 main conference paper "COLA: Contextualized Commonsense Causality Reasoning from the Causal I…"
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems
- DIALFACT, a testing benchmark dataset of crowd-annotated conversational claims, paired with pieces of evidence fr…
- Technical report "Is ChatGPT a Good NLG Evaluator? A Preliminary Study"
- "The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning" (NeurIPS 2022)
- Code for the EMNLP 2021 paper "Benchmarking Commonsense Knowledge Base Population" (https://aclanthology.org/2021.emnlp-main.705.pdf)
- Repo for the paper "Controllable Text Generation with Language Constraints"
- Repo for the ACL 2023 outstanding paper "Do PLMs Know and Understand Ontological Knowledge?"
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023)