JetBrains-Research / EnvBench
[DL4C @ ICLR 2025] A Benchmark for Automated Environment Setup
☆23 · Updated last week
Alternatives and similar repositories for EnvBench
Users interested in EnvBench are comparing it to the repositories listed below.
- A pip-compatible CodeBLEU metric implementation, available for Linux/macOS/Windows (see the usage sketch after this list) ☆113 · Updated 5 months ago
- ClassEval: a benchmark for class-level code generation ☆145 · Updated 11 months ago
- The First International Workshop on Large Language Models for Code 2024 (co-located with ICSE 2024) ☆17 · Updated 11 months ago
- RepairAgent is an autonomous LLM-based agent for software repair ☆64 · Updated 2 months ago
- Large Language Models for Software Engineering ☆245 · Updated 2 months ago
- A Systematic Literature Review on Large Language Models for Automated Program Repair ☆204 · Updated 10 months ago
- ☆31 · Updated 8 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆63 · Updated last year
- Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code", In P… ☆49 · Updated 5 months ago
- ☆23 · Updated 11 months ago
- LLM agent to automatically set up arbitrary projects and run their test suites ☆46 · Updated 2 months ago
- Repo-level code generation papers ☆211 · Updated 2 months ago
- Dataflow-guided retrieval augmentation for repository-level code completion, ACL 2024 (main) ☆26 · Updated 6 months ago
- A Reproducible Benchmark of Recent Java Bugs ☆42 · Updated last month
- ✅ SRepair: Powerful LLM-based Program Repairer with $0.029/Fixed Bug ☆71 · Updated last year
- Dianshu-Liao / AAA-Code-Generation-Framework-for-Code-Repository-Local-Aware-Global-Aware-Third-Party-Aware ☆20 · Updated last year
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆81 · Updated last year
- List of research papers from ICSE, FSE, ASE, and ISSTA since 2020 ☆27 · Updated last week
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI ☆149 · Updated 9 months ago
- [TOSEM 2023] A Survey of Learning-based Automated Program Repair ☆69 · Updated last year
- For our ICSE23 paper "Impact of Code Language Models on Automated Program Repair" by Nan Jiang, Kevin Liu, Thibaud Lutellier, and Lin Tan ☆63 · Updated 11 months ago
- ☆153 · Updated 2 months ago
- [ISSTA 2025] A Large-scale Empirical Study on Fine-tuning Large Language Models for Unit Testing ☆13 · Updated 7 months ago
- [ICSE'25] Aligning the Objective of LLM-based Program Repair ☆19 · Updated 6 months ago
- ☆26 · Updated last year
- A multi-lingual program repair benchmark set based on the Quixey Challenge ☆127 · Updated 3 years ago
- Source Code for "Exploring and Unleashing the Power of Large Language Models in Automated Code Translation" ☆23 · Updated 2 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆76 · Updated last year
- BugsInPy: Benchmarking Bugs in Python Projects ☆111 · Updated last year
- A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories ☆32 · Updated last year
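
For the pip-compatible CodeBLEU implementation listed above, a minimal usage sketch follows. It assumes the `codebleu` package from PyPI and its `calc_codebleu` entry point; treat the exact argument names and return keys as assumptions to verify against the package's README.

```python
# Minimal sketch: scoring a model-generated snippet against a reference
# with CodeBLEU. Assumes the `codebleu` PyPI package (pip install codebleu);
# check the package docs for the exact signature.
from codebleu import calc_codebleu

reference = "def add(a, b):\n    return a + b"
prediction = "def add(x, y):\n    return x + y"

# References and predictions are passed as parallel lists; the weights blend
# the n-gram, weighted n-gram, AST-match, and dataflow-match components.
result = calc_codebleu(
    [reference],
    [prediction],
    lang="python",
    weights=(0.25, 0.25, 0.25, 0.25),
)
print(result["codebleu"])  # overall score in [0, 1]
```

Unlike plain BLEU, the AST and dataflow components reward the renamed-but-equivalent prediction above, which is why repository-level code generation benchmarks often report CodeBLEU alongside exact match.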