open-compass / DevEval
A Comprehensive Benchmark for Software Development.
☆121 · Updated last year
Alternatives and similar repositories for DevEval
Users interested in DevEval are comparing it to the libraries listed below.
- Reproducing R1 for Code with Reliable Rewards ☆275 · Updated 6 months ago
- [NeurIPS 2025 D&B] SWE-bench Goes Live! ☆139 · Updated last week
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆84 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆161 · Updated last year
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆197 · Updated 4 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | ACL 2024 SRW Oral ☆64 · Updated last year
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆161 · Updated last year
- InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks (ICML 2024) ☆160 · Updated 6 months ago
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents ☆129 · Updated 8 months ago
- NaturalCodeBench (Findings of ACL 2024) ☆67 · Updated last year
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" ☆95 · Updated 8 months ago
- AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ☆324 · Updated 2 weeks ago
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆98 · Updated 2 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆164 · Updated 3 months ago
- [ACL 2024] AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning ☆231 · Updated 10 months ago
- ☆32 · Updated 6 months ago
- ☆54 · Updated last year
- RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆181 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ☆259 · Updated last year
- Official implementation of Dynamic LLM-Agent Network: An LLM-Agent Collaboration Framework with Agent Team Optimization ☆181 · Updated last year
- ☆241 · Updated last year
- ☆127 · Updated 6 months ago
- ☆175 · Updated last month
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆135 · Updated 9 months ago
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? ☆134 · Updated last year
- A benchmark list for the evaluation of large language models. ☆151 · Updated 2 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- A new tool-learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ☆198 · Updated 7 months ago
- MTU-Bench: A Multi-granularity Tool-Use Benchmark for Large Language Models ☆57 · Updated 4 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆366 · Updated last year