allenai / understanding_mcqa
Code for the arXiv preprint "Answer, Assemble, Ace: Understanding How Transformers Answer Multiple Choice Questions"
☆14 · Updated 2 months ago
Alternatives and similar repositories for understanding_mcqa
Users interested in understanding_mcqa are comparing it to the libraries listed below.
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆93 · Updated last year
- Evaluating the Moral Beliefs Encoded in LLMs ☆30 · Updated 9 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- ⚓️ Repository for the "Thought Anchors: Which LLM Reasoning Steps Matter?" paper ☆84 · Updated last month
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆158 · Updated 7 months ago
- [EMNLP '23] Discriminator-Guided Chain-of-Thought Reasoning ☆49 · Updated 11 months ago
- Official repository for ODQA experiments from "Decomposed Prompting: A Modular Approach for Solving Complex Tasks" (ICLR '23) ☆11 · Updated 2 years ago
- Agent Skill Induction: "Inducing Programmatic Skills for Agentic Tasks" ☆29 · Updated 5 months ago
- ☆52 · Updated 11 months ago
- Evaluating LLMs with fewer examples ☆161 · Updated last year
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding ☆42 · Updated 6 months ago
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- The corresponding code from our paper "Making Reasoning Matter: Measuring and Improving Faithfulness of Chain-of-Thought Reasoning… ☆12 · Updated last year
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆34 · Updated 5 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆106 · Updated 2 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆136 · Updated 3 months ago
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆151 · Updated 8 months ago
- [COLM 2025] EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees ☆24 · Updated 2 months ago
- ☆78 · Updated 2 weeks ago
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans ☆99 · Updated last week
- ☆29 · Updated last year
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆37 · Updated 9 months ago
- Code and dataset for the Socratic Debugging task, a novel task for Socratically Questioning Novice De… ☆18 · Updated last year
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆50 · Updated last month
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆64 · Updated 9 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models ☆93 · Updated 4 months ago
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data ☆43 · Updated 7 months ago
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆47 · Updated 10 months ago
- A toolkit for describing model features and intervening on those features to steer behavior ☆204 · Updated 10 months ago