CoderEval / CoderEval
A collection of practical code generation tasks and tests from open source projects. Complementary to HumanEval by OpenAI.
☆121 · Updated 11 months ago
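Like HumanEval, CoderEval scores generations by executing them against each task's unit tests and aggregating results with the pass@k estimator. Below is a minimal sketch of that estimator; the function shown is illustrative and not CoderEval's actual evaluation harness.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    n = samples generated per task, c = samples that pass all unit tests."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples for one task, 37 of them pass -> estimate pass@10
print(pass_at_k(n=200, c=37, k=10))
```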
Related projects
Alternatives and complementary repositories for CoderEval
- Repo-Level Code generation papers ☆94 · Updated 5 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆46 · Updated 3 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆122 · Updated 3 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆57 · Updated 4 months ago
- Code and data for XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence ☆66 · Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆133 · Updated 3 months ago
- Benchmark ClassEval for class-level code generation. ☆126 · Updated 3 weeks ago
- A multi-programming language benchmark for LLMs ☆207 · Updated this week
- A collection of practical code generation tasks and tests from open source projects. Complementary to HumanEval by OpenAI. ☆22 · Updated last year
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆222 · Updated 3 weeks ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆84 · Updated this week
- Dataflow-guided retrieval augmentation for repository-level code completion, ACL 2024 (main) ☆21 · Updated 5 months ago
- Reinforcement Learning for Repository-Level Code Completion ☆15 · Updated 3 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆115 · Updated last month
- Enhancing AI Software Engineering with Repository-level Code Graph ☆96 · Updated 2 months ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆28 · Updated 2 months ago
- Aix-bench, a Java benchmark for code synthesis problems. ☆51 · Updated 2 years ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆42 · Updated last month
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆27 · Updated 4 months ago
- Industrial-level evaluation benchmarks for coding LLMs covering the full life cycle of AI-native software development (an enterprise-grade code LLM evaluation suite, continuously being opened up). ☆80 · Updated 10 months ago
- Pip-compatible CodeBLEU metric implementation available for Linux/macOS/Windows (see the usage sketch after this list) ☆64 · Updated this week
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆97 · Updated 10 months ago
- Official implementation of our ICSE 2023 paper on Automatic Code Generation. ☆23 · Updated last year
- Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code", In P… ☆40 · Updated 5 months ago
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework" ☆59 · Updated 2 years ago
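The CodeBLEU entry above refers to a pip-installable reimplementation of the metric. A minimal usage sketch follows, assuming the package is published as `codebleu` and exposes a `calc_codebleu` function; both are assumptions based on recent releases of that project, not details confirmed by this listing.

```python
# pip install codebleu   (assumed package name)
from codebleu import calc_codebleu

reference = "def add(a, b):\n    return a + b"
prediction = "def add(x, y):\n    return x + y"

# Combines n-gram, weighted n-gram, AST, and data-flow match scores.
result = calc_codebleu([reference], [prediction], lang="python",
                       weights=(0.25, 0.25, 0.25, 0.25))
print(result["codebleu"])
```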