open-compass / DevEval
A Comprehensive Benchmark for Software Development.
★127 · Updated last year
Alternatives and similar repositories for DevEval
Users interested in DevEval are comparing it to the repositories listed below.
- [NeurIPS 2025 D&B] SWE-bench Goes Live! · ★161 · Updated this week
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents · ★229 · Updated 6 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? · ★166 · Updated last year
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" · ★85 · Updated last year
- Reproducing R1 for Code with Reliable Rewards · ★285 · Updated 8 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation · ★164 · Updated last year
- InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks (ICML 2024) · ★179 · Updated 8 months ago
- RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) · ★186 · Updated last year
- ★56 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) · ★170 · Updated 5 months ago
- ★242 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW · ★64 · Updated last year
- AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… · ★367 · Updated 2 months ago
- ★104 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" · ★265 · Updated last year
- Official repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… · ★132 · Updated last year
- [ACL 2024] AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning · ★233 · Updated last year
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution · ★104 · Updated 4 months ago
- A benchmark list for the evaluation of large language models · ★157 · Updated 2 weeks ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] · ★389 · Updated last year
- ToolBench, an evaluation suite for LLM tool manipulation capabilities · ★172 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) · ★69 · Updated last year
- e · ★43 · Updated 9 months ago
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents · ★133 · Updated 10 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" · ★260 · Updated 8 months ago
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings · ★62 · Updated last year
- [NeurIPS 2025 Spotlight] Co-Evolving LLM Coder and Unit Tester via Reinforcement Learning · ★150 · Updated 4 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph · ★248 · Updated 10 months ago
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? · ★136 · Updated last year
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval · ★87 · Updated last year