FSoft-AI4Code / RepoExec
Benchmark for repository-level code generation, focused on executability, correctness against test cases, and usage of contexts from cross-file dependencies
☆19 · Updated 4 months ago
Related projects
Alternatives and complementary repositories for RepoExec
- Graph-based method for end-to-end code completion with repository-level context awareness ☆47 · Updated 2 months ago
- [EMNLP 2023] The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation ☆84 · Updated 3 months ago
- Predicting Program Behavior with Dynamic Dependencies Learning ☆24 · Updated 3 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆57 · Updated 7 months ago
- [ACL 2024] Novel reranking method to select the best solutions for code generation ☆14 · Updated 5 months ago
- Language Model for Mainframe Modernization ☆42 · Updated 2 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | ACL 2024 SRW (oral) ☆52 · Updated last month
- Open-source Self-Instruction Tuning Code LLM ☆168 · Updated last year
- [EACL 2024] ICE-Score: Instructing Large Language Models to Evaluate Code ☆69 · Updated 5 months ago
- r2e: Turn any GitHub repository into a programming agent environment ☆89 · Updated 3 weeks ago
- LibMoE: A library for comprehensive benchmarking of Mixture of Experts in large language models ☆29 · Updated last week
- Resources for the paper "EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms" ☆75 · Updated last month
- ☆39 · Updated 5 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆38 · Updated last month
- Official repository of the paper "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI" ☆86 · Updated last month
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆44 · Updated 10 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆100 · Updated 2 weeks ago
- 🚀 CodeMMLU Evaluator: A framework for evaluating language models on the CodeMMLU MCQ benchmark ☆14 · Updated 3 weeks ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆114 · Updated last month
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆49 · Updated 8 months ago
- Baselines for all tasks from the Long Code Arena benchmarks 🏟️ ☆23 · Updated 2 months ago
- Repository for the paper "Tools Are Instrumental for Language Agents in Complex Environments" ☆32 · Updated last month
- ☆37 · Updated 3 weeks ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆94 · Updated 2 months ago
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆110 · Updated 3 weeks ago
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆40 · Updated 3 months ago
- Training and Benchmarking LLMs for Code Preference ☆24 · Updated this week
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval ☆74 · Updated 2 months ago
- Large Language Models Meet NL2Code: A Survey ☆34 · Updated this week