bigcode-project / bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
☆383 · Updated 2 months ago
Alternatives and similar repositories for bigcodebench
Users interested in bigcodebench are comparing it to the repositories listed below.
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆554 · Updated this week
- Official codebase for "SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution" ☆551 · Updated 3 months ago
- Scaling Data for SWE-agents ☆256 · Updated this week
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym [ICML 2025] ☆489 · Updated last month
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ☆245 · Updated 7 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆167 · Updated 10 months ago
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task ☆186 · Updated this week
- ☆97 · Updated 11 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆145 · Updated 8 months ago
- RewardBench: the first evaluation tool for reward models ☆604 · Updated 2 weeks ago
- MapCoder: Multi-Agent Code Generation for Competitive Problem Solving ☆151 · Updated 4 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆143 · Updated 10 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆326 · Updated last year
- A Comprehensive Benchmark for Software Development ☆110 · Updated last year
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆306 · Updated 4 months ago
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ☆195 · Updated this week
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ☆220 · Updated last year
- ☆291 · Updated 11 months ago
- Automatic evals for LLMs ☆437 · Updated 2 weeks ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆254 · Updated 3 months ago
- Run evaluation on LLMs using human-eval benchmark ☆414 · Updated last year
- RepoQA: Evaluating Long-Context Code Understanding ☆109 · Updated 7 months ago
- Reproducing R1 for Code with Reliable Rewards ☆221 · Updated last month
- Benchmarking LLMs with Challenging Tasks from Real Users ☆226 · Updated 7 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆184 · Updated 2 months ago
- AWM: Agent Workflow Memory ☆279 · Updated 4 months ago
- A benchmark for LLMs on complicated tasks in the terminal ☆177 · Updated this week
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆215 · Updated last month
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆219 · Updated last month
- ☆232 · Updated 10 months ago