allenai / discoverybench
Discovering Data-driven Hypotheses in the Wild
☆122 · Updated 6 months ago
Alternatives and similar repositories for discoverybench
Users interested in discoverybench are comparing it to the libraries listed below.
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding. ☆46 · Updated 9 months ago
- [ICLR'25] ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery ☆115 · Updated 3 months ago
- This is the official repository for HypoGeniC (Hypothesis Generation in Context) and HypoRefine, which are automated, data-driven tools t… ☆96 · Updated last month
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆102 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆220 · Updated last month
- A virtual environment for developing and evaluating automated scientific discovery agents. ☆193 · Updated 9 months ago
- ☆52 · Updated 9 months ago
- ☆321 · Updated last year
- [ACL 2024] <Large Language Models for Automated Open-domain Scientific Hypotheses Discovery>. It has also received the best poster award … ☆42 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 4 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 9 months ago
- ☆55 · Updated last month
- ☆124 · Updated 2 months ago
- A benchmark that challenges language models to code solutions for scientific problems ☆158 · Updated last week
- Repository for the paper Stream of Search: Learning to Search in Language ☆152 · Updated 10 months ago
- Official implementation of the ACL 2024 paper: Scientific Inspiration Machines Optimized for Novelty ☆90 · Updated last year
- ☆51 · Updated last year
- ☆129 · Updated last year
- This repository contains the ScholarQABench data and evaluation pipeline. ☆90 · Updated 4 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆220 · Updated 6 months ago
- Optimize Any User-defined Compound AI Systems ☆63 · Updated 4 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆122 · Updated last year
- Evaluating LLMs with fewer examples ☆170 · Updated last year
- Large language models (LLMs) made easy, EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆76 · Updated last year
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆95 · Updated last year
- ☆202 · Updated 2 weeks ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- A mechanistic approach for understanding and detecting factual errors of large language models. ☆49 · Updated last year
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆113 · Updated 2 weeks ago