asaparov / prontoqa
Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task.
☆150 · Updated last month
Alternatives and similar repositories for prontoqa
Users interested in prontoqa are comparing it to the repositories listed below.
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated 2 years ago
- ☆177 · Updated last year
- ☆57 · Updated 4 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- ☆131 · Updated last year
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆111 · Updated 2 years ago
- [ICML 2023] Code for our paper “Compositional Exemplars for In-context Learning”. ☆102 · Updated 2 years ago
- A unified benchmark for math reasoning ☆88 · Updated 2 years ago
- ☆189 · Updated 3 months ago
- ☆86 · Updated 2 years ago
- The LM Contamination Index is a manually created database of contamination evidence for LMs. ☆80 · Updated last year
- The official code for the TACL 2021 paper "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". ☆80 · Updated 2 years ago
- ☆44 · Updated last year
- ☆52 · Updated last year
- ☆78 · Updated last year
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- ☆75 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆130 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- Supporting code for the ReCEval paper ☆30 · Updated last year
- ☆115 · Updated last year
- Code and data accompanying our arXiv paper "Faithful Chain-of-Thought Reasoning". ☆163 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆116 · Updated last year
- ☆97 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆70 · Updated 2 years ago
- ☆82 · Updated 2 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆97 · Updated 4 years ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- ☆49 · Updated 2 years ago
- ☆103 · Updated last year