CognitionAI / devin-swebench-results
Cognition's results and methodology on SWE-bench
☆121 · Updated last year
Alternatives and similar repositories for devin-swebench-results
Users interested in devin-swebench-results are comparing it to the libraries listed below.
- Harness used to benchmark aider against the SWE-bench benchmarks ☆72 · Updated last year
- ☆100 · Updated 2 months ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task ☆201 · Updated 3 weeks ago
- This repository contains all the code for collecting code from GitHub at large scale ☆110 · Updated 2 years ago
- ☆85 · Updated last year
- Run SWE-bench evaluations remotely ☆34 · Updated last week
- Mixing Language Models with Self-Verification and Meta-Verification ☆105 · Updated 7 months ago
- Agent-computer interface for an AI software engineer ☆97 · Updated this week
- Accompanying material for the sleep-time compute paper ☆99 · Updated 3 months ago
- ☆100 · Updated last year
- Learning to Program with Natural Language ☆6 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Pre-training code for the CrystalCoder 7B LLM ☆55 · Updated last year
- Beating the GAIA benchmark with Transformers Agents 🚀 ☆131 · Updated 5 months ago
- Commit0: Library Generation from Scratch ☆161 · Updated 3 months ago
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆59 · Updated 8 months ago
- ☆109 · Updated 3 months ago
- An LLM reads a paper and produces a working prototype ☆58 · Updated 3 months ago
- ☆159 · Updated 11 months ago
- ☆123 · Updated 11 months ago
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) ☆223 · Updated last year
- ☆41 · Updated last year
- A set of utilities for running few-shot prompting experiments on large language models ☆122 · Updated last year
- Reasoning by Communicating with Agents ☆29 · Updated 3 months ago
- ☆96 · Updated 10 months ago
- A DSPy-based implementation of the tree-of-thoughts method (Yao et al., 2023) for generating persuasive arguments ☆87 · Updated 10 months ago
- 🔧 Compare how agent systems perform on several benchmarks 📊🚀 ☆99 · Updated this week
- A codebase for "Language Models can Solve Computer Tasks" ☆234 · Updated last year
- Accepted by Transactions on Machine Learning Research (TMLR) ☆130 · Updated 10 months ago
- Multimodal computer agent data collection program ☆141 · Updated last year