scicode-bench / SciCode
A benchmark that challenges language models to code solutions for scientific problems
☆140 · Updated last week
Alternatives and similar repositories for SciCode
Users interested in SciCode are comparing it to the repositories listed below.
- Repository for the paper Stream of Search: Learning to Search in Language ☆150 · Updated 7 months ago
- Evaluation of LLMs on the latest math competitions ☆162 · Updated last month
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆106 · Updated last month
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆182 · Updated 6 months ago
- Can Language Models Solve Olympiad Programming? ☆118 · Updated 8 months ago
- [ICLR'25] ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery ☆100 · Updated 3 weeks ago
- Code for the NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆229 · Updated last month
- ☆73 · Updated 3 weeks ago
- Replicating O1 inference-time scaling laws ☆89 · Updated 9 months ago
- A simple unified framework for evaluating LLMs ☆245 · Updated 5 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 11 months ago
- ☆111 · Updated 3 months ago
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) ☆224 · Updated last year
- ☆122 · Updated 6 months ago
- ☆38 · Updated 5 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆272 · Updated last week
- Benchmarking LLMs with Challenging Tasks from Real Users ☆241 · Updated 10 months ago
- SWE Arena ☆34 · Updated 2 months ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆215 · Updated last month
- ☆116 · Updated 4 months ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆97 · Updated 5 months ago
- A benchmark list for the evaluation of large language models. ☆140 · Updated last week
- A virtual environment for developing and evaluating automated scientific discovery agents. ☆183 · Updated 6 months ago
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory ☆74 · Updated 3 months ago
- [ICLR 2025] DSBench: How Far are Data Science Agents from Becoming Data Science Experts? ☆75 · Updated last month
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Paper ☆245 · Updated last month
- ☆190 · Updated 4 months ago
- ☆84 · Updated 7 months ago
- Discovering Data-driven Hypotheses in the Wild ☆110 · Updated 3 months ago
- The code for the paper: "Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models" ☆54 · Updated last year