chaochun / nlu-asdiv-dataset
☆47 · Updated last year
Alternatives and similar repositories for nlu-asdiv-dataset:
Users interested in nlu-asdiv-dataset are comparing it to the repositories listed below.
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆58 · Updated last year
- The official code of TACL 2021, "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆69 · Updated 2 years ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆55 · Updated 8 months ago
- ☆85 · Updated 2 years ago
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆39 · Updated 2 years ago
- ☆44 · Updated 6 months ago
- ☆16 · Updated 2 weeks ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- ☆61 · Updated 2 years ago
- ☆75 · Updated last year
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning". ☆98 · Updated 2 years ago
- [EMNLP 2022] Code and data for "Controllable Dialogue Simulation with In-Context Learning" ☆34 · Updated 2 years ago
- ☆73 · Updated 10 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"☆63Updated last year
- The Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K by adding irrelevant sentences to problem descriptions. ☆58 · Updated 2 years ago
- ☆41 · Updated last year
- This code accompanies the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering". ☆17 · Updated 2 years ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆67 · Updated 11 months ago
- ☆48 · Updated 11 months ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆24 · Updated last year
- Technical Report: Is ChatGPT a Good NLG Evaluator? A Preliminary Study ☆43 · Updated 2 years ago
- Supporting code for the ReCEval paper ☆28 · Updated 6 months ago
- ☆86 · Updated last year
- ☆29 · Updated 2 months ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆76 · Updated last year
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ☆47 · Updated last year
- ☆82 · Updated last year
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆22 · Updated 7 months ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning". ☆130 · Updated last year
- ☆12 · Updated last year