open-compass / DevEval
A Comprehensive Benchmark for Software Development.
☆101 · Updated 10 months ago
Alternatives and similar repositories for DevEval:
Users interested in DevEval are comparing it to the libraries listed below.
- Reproducing R1 for Code with Reliable Rewards ☆163 · Updated this week
- [ACL 2024] AutoAct: Automatic Agent Learning from Scratch for QA via Self-Planning ☆219 · Updated 3 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆126 · Updated 2 weeks ago
- A new tool learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ☆140 · Updated 2 weeks ago
- Official implementation of Dynamic LLM-Agent Network: An LLM-agent Collaboration Framework with Agent Team Optimization ☆135 · Updated 10 months ago
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆234 · Updated last month
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆123 · Updated 4 months ago
- InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks (ICML 2024) ☆116 · Updated 3 months ago
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents ☆84 · Updated 2 weeks ago
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆39 · Updated this week
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆73 · Updated last month
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆99 · Updated 2 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆72 · Updated 9 months ago
- ☆218 · Updated 7 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆303 · Updated 10 months ago
- A benchmark list for the evaluation of large language models. ☆99 · Updated last month
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" ☆75 · Updated this week
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆181 · Updated 6 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆65 · Updated 4 months ago
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset ☆97 · Updated 9 months ago
- ☆313 · Updated 6 months ago
- ☆148 · Updated 3 months ago
- ☆61 · Updated 4 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆132 · Updated 5 months ago
- [NeurIPS 2024] Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? ☆121 · Updated 7 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆131 · Updated 6 months ago
- Code for the paper "Teaching Language Models to Critique via Reinforcement Learning" ☆88 · Updated last month
- The official repository of the Omni-MATH benchmark ☆79 · Updated 3 months ago
- ☆121 · Updated 10 months ago
- NaturalCodeBench (Findings of ACL 2024) ☆62 · Updated 6 months ago