ellaneeman / disent_qa
This code accompanies the paper DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering.
☆17 · Updated last year
Alternatives and similar repositories for disent_qa:
Users interested in disent_qa are comparing it to the repositories listed below.
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) · ☆57 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" · ☆65 · Updated 10 months ago
- We construct and introduce DIALFACT, a testing benchmark dataset of crowd-annotated conversational claims, paired with pieces of evidence fr… · ☆41 · Updated 2 years ago
- Dataset for Unified Editing (EMNLP 2023), a model editing dataset where edits are natural language phrases · ☆23 · Updated 5 months ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models" · ☆39 · Updated last year
- Methods and evaluation for aligning language models temporally · ☆27 · Updated 11 months ago
- [ACL 2023] Learning Multi-step Reasoning by Solving Arithmetic Tasks. https://arxiv.org/abs/2306.01707 · ☆24 · Updated last year
- Technical Report: Is ChatGPT a Good NLG Evaluator? A Preliminary Study · ☆43 · Updated last year
- Code for Aesop: Paraphrase Generation with Adaptive Syntactic Control (EMNLP 2021) · ☆27 · Updated 3 years ago
- Code and data for the FACTOR paper · ☆44 · Updated last year
- Project page for "SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables" · ☆19 · Updated last year
- Official implementation of the ACL 2023 paper "Zero-shot Faithful Factual Error Correction" · ☆17 · Updated last year
- "FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning" (ACL 2023) · ☆13 · Updated last year
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… · ☆47 · Updated last year
- Resources for our ACL 2023 paper "Distilling Script Knowledge from Large Language Models for Constrained Language Planning" · ☆36 · Updated last year