CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023)
☆174 · Updated Aug 15, 2025
Alternatives and similar repositories for cceval
Users interested in cceval often compare it with the repositories listed below.
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆189 · updated Aug 16, 2024
- Dataflow-guided retrieval augmentation for repository-level code completion (ACL 2024, main) ☆34 · updated Mar 24, 2025
- ☆59 · updated Jun 19, 2024
- CoCoMIC: Code Completion by Jointly Modeling In-file and Cross-file Context ☆18 · updated Feb 20, 2026
- ☆672 · updated Nov 1, 2024
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024) ☆66 · updated Jun 17, 2025
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆62 · updated Oct 21, 2024
- ClassEval: a benchmark for class-level code generation ☆145 · updated Oct 24, 2024
- A multi-programming-language benchmark for LLMs ☆298 · updated Jan 28, 2026
- ☆18 · updated Apr 15, 2024
- ☆126 · updated Apr 22, 2023
- Collect simple coverage information in memory ☆11 · updated Oct 6, 2022
- A framework for the evaluation of autoregressive code generation language models ☆1,020 · updated Jul 22, 2025
- Code release for "ReCode: Robustness Evaluation of Code Generation Models" ☆58 · updated Mar 20, 2024
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆168 · updated Nov 15, 2024
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆48 · updated Sep 13, 2025
- [FORGE 2025] Graph-based method for end-to-end code completion with repository context awareness ☆72 · updated Sep 3, 2024
- Rigorous evaluation of LLM-synthesized code (NeurIPS 2023 & COLM 2024) ☆1,688 · updated Oct 2, 2025
- NaturalCodeBench (Findings of ACL 2024) ☆68 · updated Oct 14, 2024
- A collection of practical code generation tasks and tests in open-source projects, complementary to OpenAI's HumanEval ☆154 · updated Dec 25, 2024
- Code for the paper "Efficient Training of Language Models to Fill in the Middle" ☆199 · updated Apr 2, 2023
- [TMLR] A curated list of language modeling research for code (and other software engineering activities), plus related datasets ☆3,242 · updated Feb 1, 2026
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆478 · updated Feb 5, 2025
- EvoEval: Evolving Coding Benchmarks via LLM ☆81 · updated Apr 6, 2024
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework" ☆65 · updated Apr 18, 2022
- An evolving code generation benchmark aligned with real-world code repositories ☆67 · updated Aug 15, 2024
- [EMNLP 2023] Execution-Based Evaluation for Open-Domain Code Generation ☆49 · updated Dec 22, 2023
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆166 · updated Oct 11, 2024
- Training and benchmarking LLMs for code preference ☆38 · updated Nov 15, 2024
- ☆69 · updated Dec 15, 2024
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆117 · updated Jan 9, 2024
- Making code editing up to 7.7x faster using multi-layer speculation ☆24 · updated Feb 20, 2025
- ☆232 · updated Dec 3, 2025
- ☆44 · updated May 6, 2025
- Data and code for "DocPrompting: Generating Code by Retrieving the Docs" (ICLR 2023) ☆251 · updated Dec 15, 2023
- Evaluate LLM-synthesized @JuliaLang code ☆26 · updated Aug 17, 2024
- ☆50 · updated Sep 6, 2023
- ☆44 · updated Jun 24, 2025
- ☆489 · updated Aug 15, 2024
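Code-completion benchmarks in this family commonly report surface-level metrics such as exact match and edit similarity between the model's completion and the reference. A minimal sketch of both, using `difflib.SequenceMatcher` as a stand-in for a normalized edit-distance ratio (this is an illustrative approximation, not the exact scoring code of any benchmark listed above):

```python
import difflib

def exact_match(pred: str, ref: str) -> bool:
    # Strict string equality after trimming surrounding whitespace.
    return pred.strip() == ref.strip()

def edit_similarity(pred: str, ref: str) -> float:
    # Character-level similarity in [0, 1]; identical strings score 1.0.
    # SequenceMatcher.ratio() is a common lightweight proxy for
    # 1 - normalized Levenshtein distance.
    return difflib.SequenceMatcher(None, pred, ref).ratio()

# Hypothetical completion vs. reference for illustration.
pred = "return os.path.join(root, name)"
ref = "return os.path.join(root, fname)"
print(exact_match(pred, ref))                  # near-miss: not an exact match
print(round(edit_similarity(pred, ref), 2))    # but high edit similarity
```

Corpus-level scores are then simple averages of these per-example values over the benchmark's test set.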