open-compass / DevEval
A Comprehensive Benchmark for Software Development.
⭐116 · Updated last year
Alternatives and similar repositories for DevEval
Users that are interested in DevEval are comparing it to the libraries listed below
- Reproducing R1 for Code with Reliable Rewards ⭐267 · Updated 6 months ago
- [NeurIPS 2025 D&B] SWE-bench Goes Live! ⭐132 · Updated this week
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ⭐185 · Updated 4 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ⭐64 · Updated last year
- ⭐239 · Updated last year
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ⭐83 · Updated last year
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ⭐96 · Updated last month
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ⭐160 · Updated 2 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ⭐178 · Updated last year
- InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks (ICML 2024) ⭐157 · Updated 5 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ⭐158 · Updated 11 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ⭐156 · Updated last year
- AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and Interactive Coding Agent, ACL'24 Best Resource… ⭐307 · Updated this week
- ⭐54 · Updated last year
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents ⭐128 · Updated 7 months ago
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ⭐278 · Updated last week
- A benchmark list for evaluation of large language models. ⭐146 · Updated 2 months ago
- A new tool learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ⭐193 · Updated 6 months ago
- [ICLR 2025] Benchmarking Agentic Workflow Generation ⭐132 · Updated 8 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ⭐258 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ⭐221 · Updated 7 months ago
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" ⭐95 · Updated 7 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ⭐152 · Updated last year
- A Comprehensive Survey on Long Context Language Modeling ⭐199 · Updated 4 months ago
- ⭐153 · Updated 2 weeks ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ⭐263 · Updated last year
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ⭐248 · Updated 6 months ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities. ⭐164 · Updated last year
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ⭐227 · Updated last year
- Official implementation of paper "On the Diagram of Thought" (https://arxiv.org/abs/2409.10038) ⭐187 · Updated 2 months ago