floatai / HumanEval-XL
[LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization
☆37 Updated last month
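A minimal sketch of how the benchmark might be loaded for inspection with the Hugging Face `datasets` library. The hub path `FloatAI/humaneval-xl`, the `python` configuration name, and the split layout are assumptions, not details confirmed on this page; check the dataset card for the actual identifiers.

```python
# Hedged sketch: inspecting HumanEval-XL via the Hugging Face `datasets` library.
# The hub path "FloatAI/humaneval-xl" and the "python" config name are assumptions;
# adjust them to whatever the official dataset card specifies.
from datasets import load_dataset

ds = load_dataset("FloatAI/humaneval-xl", "python")
print(ds)  # inspect the available splits (e.g. one per natural language)

# Peek at one example and its schema before relying on specific field names.
first_split = next(iter(ds.values()))
example = first_split[0]
print(example.keys())
```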
Alternatives and similar repositories for HumanEval-XL:
Users interested in HumanEval-XL are comparing it to the repositories listed below
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework" ☆62 Updated 2 years ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆55 Updated 7 months ago
- Generate WizardCoder Instruct data from CodeAlpaca ☆20 Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆136 Updated 8 months ago
- ☆107 Updated 8 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆72 Updated 9 months ago
- Reinforcement Learning for Repository-Level Code Completion ☆29 Updated 7 months ago
- ☆35 Updated 9 months ago
- ☆13 Updated 4 months ago
- ☆46 Updated 2 years ago
- Dataflow-guided retrieval augmentation for repository-level code completion, ACL 2024 (main) ☆22 Updated 2 weeks ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆52 Updated 5 months ago
- Releasing code for "ReCode: Robustness Evaluation of Code Generation Models" ☆52 Updated last year
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆110 Updated last year
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆30 Updated 9 months ago
- Benchmark ClassEval for class-level code generation ☆138 Updated 5 months ago
- ☆12 Updated 7 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆152 Updated 7 months ago
- [NeurIPS'24] SemCoder: Training Code Language Models with Comprehensive Semantics Reasoning ☆21 Updated 4 months ago
- A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories ☆20 Updated 7 months ago
- ☆28 Updated 5 months ago
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆79 Updated 6 months ago
- The official implementation for the paper "Domain Adaptive Code Completion via Language Models and Decoupled Domain Databases" ☆14 Updated last year
- ☆124 Updated last year
- ☆42 Updated last month
- Baselines for all tasks from Long Code Arena benchmarks 🏟️ ☆28 Updated last week
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆64 Updated 7 months ago
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆138 Updated 3 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆135 Updated 6 months ago
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024) ☆54 Updated 9 months ago