amazon-science / mxeval
☆110 · Updated last year
Alternatives and similar repositories for mxeval
Users interested in mxeval are comparing it to the repositories listed below.
- Releasing code for "ReCode: Robustness Evaluation of Code Generation Models" ☆52 · Updated last year
- ☆124 · Updated 2 years ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆155 · Updated last month
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆170 · Updated last year
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆114 · Updated last year
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆153 · Updated 11 months ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆38 · Updated 6 months ago
- [ICML 2023] Data and code release for the paper "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" ☆255 · Updated 10 months ago
- CodeBERTScore: an automatic metric for code generation, based on BERTScore ☆199 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆56 · Updated 10 months ago
- Source code for the paper "ReACC: A Retrieval-Augmented Code Completion Framework" ☆63 · Updated 3 years ago
- A multi-programming-language benchmark for LLMs ☆269 · Updated last month
- Official code of our work, AVATAR: A Parallel Corpus for Java-Python Program Translation.
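
Most of the execution-based benchmarks above (mxeval/MBXP, HumanEval-XL, DS-1000, CRUXEval) report pass@k. As a point of reference, here is a minimal sketch of the unbiased pass@k estimator from Chen et al. (2021), which harnesses of this kind commonly implement; the function name `pass_at_k` and the sample counts below are illustrative, not taken from any of the listed repos:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: total samples generated for a problem
    c: number of those samples that passed the unit tests
    k: evaluation budget (samples drawn without replacement)
    """
    if n - c < k:
        # Fewer than k failing samples exist, so any k-sample draw
        # must contain at least one passing sample.
        return 1.0
    # P(at least one pass in k draws) = 1 - C(n-c, k) / C(n, k)
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical per-problem results: 200 samples each, varying pass counts.
results = [(200, 12), (200, 0), (200, 87)]
for k in (1, 10, 100):
    score = sum(pass_at_k(n, c, k) for n, c in results) / len(results)
    print(f"pass@{k}: {score:.4f}")
```

The benchmark score is the mean of this per-problem estimate, as in the loop above; computing it from n sampled completions rather than exactly k avoids the high variance of a single k-sample draw.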