tongye98 / Awesome-Code-Benchmark
A comprehensive code-domain benchmark review of LLM research.
☆176 · Updated 3 months ago
Alternatives and similar repositories for Awesome-Code-Benchmark
Users interested in Awesome-Code-Benchmark are comparing it to the libraries listed below.
- Must-read papers on Repository-level Code Generation & Issue Resolution 🔥 ☆227 · Updated this week
- Reproducing R1 for Code with Reliable Rewards ☆278 · Updated 7 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆67 · Updated last year
- Repo-Level Code generation papers ☆226 · Updated last week
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆85 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆164 · Updated 4 months ago
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆101 · Updated 3 months ago
- A Comprehensive Benchmark for Software Development. ☆124 · Updated last year
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆163 · Updated last year
- [TOSEM'25] The official GitHub page for the survey paper "A Survey on Large Language Models for Code Generation". ☆176 · Updated 5 months ago
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ☆294 · Updated last week
- [EMNLP 2024] CodeJudge: Evaluating Code Generation with Large Language Models ☆53 · Updated last month
- A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories ☆35 · Updated last year
- ☆44 · Updated last month
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆40 · Updated 9 months ago
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024) ☆64 · Updated 6 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆80 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆163 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ☆237 · Updated 8 months ago
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆217 · Updated 5 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆182 · Updated last year
- [NeurIPS'25] Official Implementation of RISE (Reinforcing Reasoning with Self-Verification) ☆30 · Updated 4 months ago
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" ☆95 · Updated 9 months ago
- ClassEval: a benchmark for class-level code generation. ☆146 · Updated last year
- [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI ☆461 · Updated 2 months ago
- ☆25 · Updated 4 months ago
- Baselines for all tasks from Long Code Arena benchmarks 🏟️ ☆38 · Updated 8 months ago
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆48 · Updated last month
- [NeurIPS 2025 D&B] 🚀 SWE-bench Goes Live! ☆146 · Updated this week
- A distributed, extensible, secure solution for evaluating machine generated code with unit tests in multiple programming languages. ☆61 · Updated last year