seketeam / DevEval
A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories
☆30 · Updated 11 months ago
Alternatives and similar repositories for DevEval
Users interested in DevEval are comparing it to the repositories listed below.
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆63 · Updated last year
- Repo-Level Code generation papers ☆199 · Updated last month
- Benchmark ClassEval for class-level code generation. ☆145 · Updated 10 months ago
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆148 · Updated 7 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models". ☆80 · Updated last year
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework" ☆63 · Updated 3 years ago
- Reinforcement Learning for Repository-Level Code Completion ☆36 · Updated last year
- Dataflow-guided retrieval augmentation for repository-level code completion, ACL 2024 (main) ☆26 · Updated 5 months ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆38 · Updated 5 months ago
- CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context ☆17 · Updated last week
- Pip-compatible CodeBLEU metric implementation for Linux/macOS/Windows (a usage sketch follows this list) ☆105 · Updated 4 months ago
- ☆46 · Updated 3 years ago
- Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code", In P…
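Since the CodeBLEU entry above is a pip-installable package, here is a minimal usage sketch. It assumes the `codebleu` package on PyPI and its documented `calc_codebleu` helper; the example snippets and the equal component weights are illustrative, not prescriptive.

```python
# Minimal CodeBLEU sketch (assumes the pip-installable `codebleu` package
# from PyPI; install with: pip install codebleu).
from codebleu import calc_codebleu

# Illustrative reference/prediction pair; a real evaluation would use model
# outputs and ground-truth solutions from a benchmark such as DevEval.
reference = "def add(a, b):\n    return a + b"
prediction = "def add(x, y):\n    return x + y"

# CodeBLEU combines n-gram, weighted n-gram, AST, and dataflow matches;
# the four weights below are the conventional equal weighting.
result = calc_codebleu(
    [reference],    # one reference per prediction
    [prediction],   # model predictions
    lang="python",
    weights=(0.25, 0.25, 0.25, 0.25),
)
print(result["codebleu"])  # overall score in [0, 1]
```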