siegelz / core-bench
☆52 · Updated 8 months ago
Alternatives and similar repositories for core-bench
Users interested in core-bench are comparing it to the repositories listed below.
- ☆180 · Updated last week
- [ICLR'25] ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery ☆107 · Updated 2 months ago
- ☆117 · Updated 3 weeks ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 8 months ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆348 · Updated this week
- A benchmark that challenges language models to code solutions for scientific problems ☆153 · Updated last week
- A virtual environment for developing and evaluating automated scientific discovery agents. ☆189 · Updated 8 months ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆299 · Updated last week
- Discovering Data-driven Hypotheses in the Wild ☆117 · Updated 5 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆304 · Updated 2 weeks ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆107 · Updated 2 weeks ago
- ☆224 · Updated last week
- Reproducible, flexible LLM evaluations ☆264 · Updated 2 weeks ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆233 · Updated 3 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆442 · Updated last week
- Collection of evals for Inspect AI ☆280 · Updated last week
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Open source interpretability artefacts for R1. ☆163 · Updated 6 months ago
- Evaluation of LLMs on latest math competitions ☆178 · Updated 3 weeks ago
- AWM: Agent Workflow Memory ☆353 · Updated 9 months ago
- Train your own SOTA deductive reasoning model ☆108 · Updated 8 months ago
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆183 · Updated 2 weeks ago
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆171 · Updated 2 weeks ago
- ☆135 · Updated 7 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆105 · Updated 6 months ago
- Code for the paper 🌳 Tree Search for Language Model Agents ☆217 · Updated last year
- WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks? ☆218 · Updated 2 weeks ago
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 3 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆151 · Updated 9 months ago
- CodeScientist: An automated scientific discovery system for code-based experiments ☆299 · Updated 4 months ago