A comprehensive review of code-domain benchmarks for LLM research.
☆203 · Sep 22, 2025 · Updated 5 months ago
Alternatives and similar repositories for Awesome-Code-Benchmark
Users interested in Awesome-Code-Benchmark are comparing it to the repositories listed below.
- ☆13 · Oct 11, 2024 · Updated last year
- ☆12 · Nov 5, 2024 · Updated last year
- A curated list of products, benchmarks, and research papers on autonomous code agents. Beyond coding — they're redefining how software ch… ☆85 · Updated this week
- ☆15 · Feb 24, 2021 · Updated 5 years ago
- Code repository for "RL Grokking Recipe: How RL Unlocks and Transfers New Algorithms in LLMs" ☆30 · Oct 12, 2025 · Updated 4 months ago
- The Infibench variant of bigcode-evaluation-harness --- a framework for evaluating autoregressive code-generation language models ☆14 · Oct 19, 2024 · Updated last year
- Automated Benchmarking of LLM Agents on Real-World Software Security Tasks [NeurIPS 2025] ☆56 · Jan 27, 2026 · Updated last month
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆14 · Apr 9, 2025 · Updated 10 months ago
- The data for the CRASS benchmark ☆16 · Oct 24, 2022 · Updated 3 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆45 · Jun 14, 2024 · Updated last year
- Must-read papers on Repository-level Code Generation & Issue Resolution 🔥 ☆259 · Dec 22, 2025 · Updated 2 months ago
- ☆19 · Jun 13, 2024 · Updated last year
- /slash is an open-source, mobile-first GitHub assistant powered by AI. Browse repos, review code, write or edit files with AI, and push c… ☆20 · Aug 20, 2025 · Updated 6 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆577 · Updated this week
- ☆58 · Jun 30, 2023 · Updated 2 years ago
- Source code for Grounded Adaptation for Zero-shot Executable Semantic Parsing ☆21 · Feb 1, 2021 · Updated 5 years ago
- TDD-Bench-Verified is a new benchmark for generating test cases for test-driven development (TDD).