CoderEval / CoderEval
A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI.
☆134 · Updated 2 months ago
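Like HumanEval, CoderEval scores generated code by executing it against tests and reporting pass@k. Below is a minimal sketch of the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021); the sample counts are illustrative, not CoderEval results.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples drawn per task, c of them passed the tests."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable running product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative numbers only: 10 generations for one task, 3 passed.
print(f"pass@1  = {pass_at_k(10, 3, 1):.3f}")   # 0.300
print(f"pass@10 = {pass_at_k(10, 3, 10):.3f}")  # 1.000
```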
Alternatives and similar repositories for CoderEval:
Users interested in CoderEval are comparing it to the libraries listed below.
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆52 · Updated 6 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆133 · Updated 7 months ago
- ClassEval: a benchmark for class-level code generation. ☆135 · Updated 4 months ago
- Code and data for XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence ☆68 · Updated last month
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆66 · Updated 8 months ago
- Repo-level code generation papers ☆146 · Updated last week
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆146 · Updated 6 months ago
- A collection of practical code generation tasks and tests from open source projects. Complementary to HumanEval by OpenAI. ☆24 · Updated 2 years ago
- ☆121 · Updated last year
- Pip-compatible CodeBLEU metric implementation available for Linux/macOS/Windows (a usage sketch follows this list) ☆80 · Updated last week
- A multi-programming-language benchmark for LLMs ☆235 · Updated last month
- Reinforcement Learning for Repository-Level Code Completion ☆24 · Updated 6 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ☆236 · Updated 4 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆131 · Updated 5 months ago
- ☆31 · Updated 8 months ago
- Dianshu-Liao / AAA-Code-Generation-Framework-for-Code-Repository-Local-Aware-Global-Aware-Third-Party-Aware ☆18 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆50 · Updated 4 months ago
- Dataflow-guided retrieval augmentation for repository-level code completion, ACL 2024 (main) ☆22 · Updated 9 months ago
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆108 · Updated last year
- ☆106 · Updated 7 months ago
- Codev-Bench (Code Development Benchmark), a fine-grained, real-world, repository-level, and developer-centric evaluation framework. Codev… ☆36 · Updated 4 months ago
- Large Language Models Meet NL2Code: A Survey ☆36 · Updated 3 months ago
- Industrial-level evaluation benchmarks for coding LLMs across the full life cycle of AI-native software development (enterprise-grade code LLM evaluation suite, continuously being released) ☆88 · Updated last year
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆35 · Updated this week
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆117 · Updated 3 months ago
- Large Language Models for Software Engineering ☆211 · Updated this week
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework" ☆62 · Updated 2 years ago
- A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories ☆20 · Updated 6 months ago
- Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code", In P … ☆43 · Updated 9 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆78 · Updated 5 months ago
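For the CodeBLEU entry above: a minimal usage sketch, assuming the pip-installable `codebleu` package and its `calc_codebleu` entry point; the two snippets compared here are made up for illustration.

```python
# pip install codebleu
from codebleu import calc_codebleu

reference  = "def add(a, b):\n    return a + b"
prediction = "def add(x, y):\n    return x + y"

# Returns a dict with the overall score plus its four components:
# n-gram match, weighted n-gram match, AST (syntax) match, and data-flow match.
result = calc_codebleu([reference], [prediction], lang="python",
                       weights=(0.25, 0.25, 0.25, 0.25))
print(result["codebleu"])
```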