FloatAI / humaneval-xl
[LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization
☆33 · Updated last month
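HumanEval-XL, like other HumanEval-style benchmarks, is scored with pass@k. As background, here is a minimal sketch of the standard unbiased pass@k estimator from the original HumanEval paper; the function name and sample counts are illustrative, not taken from this repository:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021):
    1 - C(n - c, k) / C(n, k), where n completions were sampled
    per problem and c of them passed all unit tests."""
    if n - c < k:  # every size-k subset must contain a correct completion
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative example: 200 samples per problem, 37 correct, estimate pass@10.
print(f"{pass_at_k(n=200, c=37, k=10):.4f}")
```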
Alternatives and similar repositories for humaneval-xl:
Users interested in humaneval-xl are comparing it to the repositories listed below.
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories☆49 · Updated 6 months ago
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework"☆61 · Updated 2 years ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023)☆130 · Updated 6 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models"☆62 · Updated 7 months ago
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts☆29 · Updated 7 months ago
- ☆13 · Updated 2 months ago
- Reinforcement Learning for Repository-Level Code Completion☆22 · Updated 6 months ago
- ☆30 · Updated 8 months ago
- ☆105 · Updated 7 months ago
- Dataflow-guided retrieval augmentation for repository-level code completion, ACL 2024 (main)☆21 · Updated 8 months ago
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024)☆51 · Updated 7 months ago
- ☆46 · Updated 2 years ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages☆47 · Updated 3 months ago
- ☆28 · Updated 3 months ago
- Baselines for all tasks from the Long Code Arena benchmarks 🏟️☆27 · Updated 2 weeks ago
- Code for "ReCode: Robustness Evaluation of Code Generation Models"☆52 · Updated 11 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024)☆144 · Updated 6 months ago
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning"☆109 · Updated last year
- Generate WizardCoder-style instruction data from the CodeAlpaca dataset☆20 · Updated last year
- EvoEval: Evolving Coding Benchmarks via LLM☆66 · Updated 10 months ago
- BeHonest: Benchmarking Honesty in Large Language Models☆31 · Updated 6 months ago
- This repo illustrates how to evaluate the artifacts in the paper "An Extensive Study on Pre-trained Models for Program Understanding and Generation"☆25 · Updated 2 years ago
- A paper list on data contamination in large language model evaluation☆91 · Updated last month
- A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories☆19 · Updated 5 months ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs)☆49 · Updated 6 months ago
- ☆52 · Updated 5 months ago
- Code and data for XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence☆68 · Updated last month
- The LM Contamination Index is a manually created database of contamination evidence for LMs☆77 · Updated 10 months ago
- This is the official implementation for the paper "Domain Adaptive Code Completion via Language Models and Decoupled Domain Databases"☆13 · Updated last year
- Source Code Data Augmentation for Deep Learning: A Survey☆64 · Updated 8 months ago