microsoft / FEA-Bench
[ACL25] FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation
☆41 · Updated 2 weeks ago
Alternatives and similar repositories for FEA-Bench
Users interested in FEA-Bench are comparing it to the repositories listed below.
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) · ☆169 · Updated 5 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" · ☆85 · Updated last year
- CodeRAG-Bench: Can Retrieval Augment Code Generation? · ☆165 · Updated last year
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories · ☆67 · Updated last year
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving · ☆311 · Updated last month
- Official repository for the paper "COAST: Enhancing the Code Debugging Ability of LLMs through Communicative Agent Based Data Synthesis" · ☆17 · Updated 11 months ago
- A Comprehensive Benchmark for Software Development · ☆127 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 · ☆184 · Updated last year
- Repo-Level Code generation papers · ☆231 · Updated last month
- LeetCode Training and Evaluation Dataset · ☆46 · Updated 9 months ago
- Must-read papers on Repository-level Code Generation & Issue Resolution 🔥 · ☆244 · Updated last month
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" · ☆95 · Updated 10 months ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization · ☆40 · Updated 10 months ago
- Reproducing R1 for Code with Reliable Rewards · ☆282 · Updated 8 months ago
- [NeurIPS 2025 D&B] 🚀 SWE-bench Goes Live! · ☆161 · Updated this week
- A collection of practical code generation tasks and tests in open source projects, complementary to HumanEval by OpenAI · ☆154 · Updated last year
- A new tool-learning benchmark aiming for a balance of stability and realism, based on ToolBench · ☆209 · Updated 9 months ago
- A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories · ☆36 · Updated last year
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" · ☆780 · Updated 6 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" · ☆264 · Updated last year
- Collection of papers for scalable automated alignment · ☆93 · Updated last year
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs · ☆47 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph · ☆248 · Updated 9 months ago
- A comprehensive review of code-domain benchmarks in LLM research · ☆189 · Updated 4 months ago
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI · ☆473 · Updated 3 weeks ago
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents · ☆229 · Updated 6 months ago