siegelz / core-bench
☆48 · Updated 7 months ago
Alternatives and similar repositories for core-bench
Users interested in core-bench are comparing it to the repositories listed below.
- ☆136 · Updated this week
- A benchmark that challenges language models to code solutions for scientific problems ☆144 · Updated this week
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ☆282 · Updated 2 weeks ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆184 · Updated 7 months ago
- Framework and toolkits for building and evaluating collaborative agents that can work together with humans. ☆101 · Updated last week
- AWM: Agent Workflow Memory ☆328 · Updated 8 months ago
- [ICLR 2025] DSBench: How Far are Data Science Agents from Becoming Data Science Experts? ☆76 · Updated last month
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆297 · Updated this week
- [ICLR'25] ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery ☆103 · Updated last month
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆102 · Updated 2 months ago
- Code for the paper 🌳 Tree Search for Language Model Agents ☆217 · Updated last year
- ☆109 · Updated 5 months ago
- AgentLab: An open-source framework for developing, testing, and benchmarking web agents on diverse tasks, designed for scalability and re… ☆417 · Updated this week
- SWE Arena ☆34 · Updated 3 months ago
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆168 · Updated last week
- WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks? ☆210 · Updated 3 weeks ago
- Discovering Data-driven Hypotheses in the Wild ☆113 · Updated 4 months ago
- Official Code Repository for the paper "Distilling LLM Agent into Small Models with Retrieval and Code Tools" ☆156 · Updated 2 months ago
- ☆307 · Updated last year
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆283 · Updated this week
- A virtual environment for developing and evaluating automated scientific discovery agents. ☆188 · Updated 7 months ago
- ☆215 · Updated last year
- Source code for the collaborative reasoner research project at Meta FAIR. ☆102 · Updated 5 months ago
- Reproducible, flexible LLM evaluations ☆252 · Updated 3 months ago
- ☆280 · Updated 2 months ago
- (ACL 2025 Main) Code for MultiAgentBench: Evaluating the Collaboration and Competition of LLM agents https://www.arxiv.org/pdf/2503.019… ☆169 · Updated last week
- Automatic evals for LLMs ☆539 · Updated 3 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆418 · Updated this week
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agent", ACL'24 Best Resource Pap… ☆248 · Updated 2 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆106 · Updated 2 months ago