A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages.
☆62 · Updated Oct 21, 2024
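ExecEval is typically run as a containerized execution service that receives generated code plus unit tests and reports a verdict per test. As a rough illustration of that workflow, the sketch below posts a submission to a locally running execution endpoint; the endpoint path, port, payload fields, and verdict names are assumptions for illustration only, not ExecEval's documented API.

```python
import requests

# Hypothetical example: submit machine-generated code plus unit tests to a
# locally running ExecEval-style execution service. The endpoint, port, and
# payload field names below are assumptions; consult the ExecEval README for
# the actual API.
EXEC_SERVICE_URL = "http://localhost:5000/api/execute_code"  # assumed endpoint

payload = {
    "language": "Python 3",  # assumed language identifier
    "source_code": (
        "def add(a, b):\n"
        "    return a + b\n"
        "print(add(*map(int, input().split())))\n"
    ),
    # assumed field: stdin/stdout unit-test pairs
    "unittests": [
        {"input": "1 2", "output": ["3"]},
        {"input": "10 -4", "output": ["6"]},
    ],
    "limits": {"cpu": 2, "memory": "512m"},  # assumed resource limits
}

response = requests.post(EXEC_SERVICE_URL, json=payload, timeout=60)
response.raise_for_status()

# Each test is expected to come back with a verdict such as PASSED,
# WRONG_ANSWER, RUNTIME_ERROR, or TIME_LIMIT_EXCEEDED (names assumed).
for test in response.json().get("unittests", []):
    print(test.get("exec_outcome"), test.get("result"))
```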
Alternatives and similar repositories for ExecEval
Users interested in ExecEval are comparing it to the libraries listed below.
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆87 · Updated Sep 17, 2024
- This repository contains popular code generation frameworks such as MapCoder, CodeSIM. ☆69 · Updated Jun 24, 2025
- ☆85 · Updated Jun 13, 2023
- ☆34 · Updated Mar 5, 2026
- ☆15 · Updated Nov 12, 2025
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆175 · Updated Aug 15, 2025
- [ACL 2024] CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and … ☆102 · Updated Jul 29, 2024
- MapCoder: Multi-Agent Code Generation for Competitive Problem Solving ☆186 · Updated Feb 12, 2025
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆48 · Updated Sep 13, 2025
- A multi-programming language benchmark for LLMs ☆299 · Updated Jan 28, 2026
- A framework for the evaluation of autoregressive code generation language models. ☆1,021 · Updated Jul 22, 2025
- ☆16 · Updated Dec 25, 2022
- ☆44 · Updated Jun 24, 2025
- ☆44 · Updated May 6, 2025
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆154 · Updated Dec 25, 2024
- [ACL 2023] Modeling What-to-ask and How-to-ask for Answer-unaware Conversational Question Generation ☆14 · Updated Jul 11, 2023
- Utilities for efficient fine-tuning, inference and evaluation of code generation models ☆21 · Updated Oct 3, 2023
- Official codes for EMNLP 2024 paper "Multi-expert Prompting Improves Reliability, Safety and Usefulness of Large Language Models" ☆38 · Updated Dec 14, 2024
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) ☆22 · Updated Nov 1, 2023
- evol augment any dataset online ☆61 · Updated Aug 3, 2023
- Open-source repository for the OOPSLA'24 paper "CYCLE: Learning to Self-Refine Code Generation" ☆10 · Updated Mar 8, 2024
- ☆12 · Updated Jun 20, 2023
- ☆24 · Updated Nov 19, 2024
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆479 · Updated Feb 5, 2025
- ☆13 · Updated Feb 11, 2021
- Official codes for NAACL 2025 paper "LLMs Are Biased Towards Output Formats! Systematically Evaluating and Mitigating Output Format Bias … ☆11 · Updated Nov 25, 2025
- A summary of adversarial attacks against large language models ☆39 · Updated Dec 22, 2023
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆322 · Updated Feb 24, 2025
- ☆17 · Updated Dec 4, 2024
- [EMNLP'24 Main] Encoding and Controlling Global Semantics for Long-form Video Question Answering ☆18 · Updated Oct 9, 2024
- ☆14 · Updated Jul 5, 2024
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆60 · Updated Aug 13, 2024
- UnitEval is a benchmarking and evaluation tool for AutoDev Coder. ☆13 · Updated Jan 2, 2024
- Concurrent-C to Rust Automatic Translator ☆15 · Updated Jan 26, 2023
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆86 · Updated Aug 10, 2024
- Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024 ☆1,698 · Updated Oct 2, 2025
- Run evaluation on LLMs using the HumanEval benchmark ☆428 · Updated Sep 12, 2023
- Code for the paper "Efficient Training of Language Models to Fill in the Middle" ☆202 · Updated Apr 2, 2023
- Homepage for proFL ☆23 · Updated Apr 26, 2021