bigcode-project / bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
☆279 · Updated last week
Alternatives and similar repositories for bigcodebench:
Users that are interested in bigcodebench are comparing it to the libraries listed below
- Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code" ☆292 · Updated this week
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆141 · Updated 5 months ago
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆449 · Updated 4 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆122 · Updated 3 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆136 · Updated last month
- MapCoder: Multi-Agent Code Generation for Competitive Problem Solving ☆102 · Updated 6 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆127 · Updated 6 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆127 · Updated 3 weeks ago
- A Comprehensive Benchmark for Software Development. ☆89 · Updated 8 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆108 · Updated 2 months ago
- ☆209 · Updated 5 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆231 · Updated 3 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆104 · Updated 2 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆233 · Updated 2 months ago
- ☆301 · Updated 4 months ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆136 · Updated last week
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" ☆251 · Updated 2 weeks ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆306 · Updated 4 months ago
- A simple unified framework for evaluating LLMs ☆172 · Updated this week
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆293 · Updated 2 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆450 · Updated 10 months ago
- AWM: Agent Workflow Memory ☆233 · Updated 2 months ago
- An Analytical Evaluation Board of Multi-turn LLM Agents ☆272 · Updated 8 months ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆369 · Updated 6 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆210 · Updated 2 months ago
- A new tool-learning benchmark aiming to balance stability and realism, based on ToolBench. ☆126 · Updated 4 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆296 · Updated last year
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆62 · Updated 6 months ago
- RewardBench: the first evaluation tool for reward models. ☆494 · Updated this week
- ☆153 · Updated 5 months ago