LLM4SoftwareTesting / TestEval
☆29 · Updated 7 months ago
Alternatives and similar repositories for TestEval
Users interested in TestEval are comparing it to the repositories listed below.
- Benchmark ClassEval for class-level code generation. ☆145 · Updated 10 months ago
- TeCo: an ML+Execution model for test completion ☆30 · Updated last year
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆63 · Updated last year
- ✅ SRepair: Powerful LLM-based Program Repairer with $0.029/Fixed Bug ☆68 · Updated last year
- This repo is for our ICSE 2025 submission. ☆20 · Updated last year
- A Systematic Literature Review on Large Language Models for Automated Program Repair ☆197 · Updated 9 months ago
- TestGenEval: A Real-World Unit Test Generation and Test Completion Benchmark ☆20 · Updated 8 months ago
- [ISSTA 2025] A Large-scale Empirical Study on Fine-tuning Large Language Models for Unit Testing ☆12 · Updated 6 months ago
- LLM agent to automatically set up arbitrary projects and run their test suites ☆45 · Updated last month
- Artifact repository for the paper "Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code", In P… ☆50 · Updated 4 months ago
- Large Language Models for Software Engineering ☆244 · Updated last month
- [TOSEM 2023] A Survey of Learning-based Automated Program Repair ☆69 · Updated last year
- A multi-lingual program repair benchmark set based on the Quixey Challenge ☆124 · Updated 2 years ago
- Pip-compatible CodeBLEU metric implementation available for Linux/macOS/Windows ☆108 · Updated 4 months ago
- Dataflow-guided retrieval augmentation for repository-level code completion, ACL 2024 (main) ☆26 · Updated 5 months ago
- Repo-level code generation papers ☆200 · Updated last month
- Refactory: Refactoring-based Program Repair applied to Programming Assignments ☆40 · Updated 3 years ago
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆148 · Updated 8 months ago
- BugsInPy: Benchmarking Bugs in Python Projects ☆108 · Updated last year
- A framework to generate unit tests using LLMs ☆37 · Updated 3 months ago
- Source Code for Paper "Large Language Models are Few-Shot Summarizers: Multi-Intent Comment Generation via In-Context Learning" ☆17 · Updated 2 years ago
- A Reproducible Benchmark of Recent Java Bugs ☆42 · Updated last week
- For our ICSE23 paper "Impact of Code Language Models on Automated Program Repair" by Nan Jiang, Kevin Liu, Thibaud Lutellier, and Lin Tan ☆63 · Updated 10 months ago
- [ISSTA'24] A Large-Scale Dataset Capable of Enhancing the Prowess of Large Language Models for Program Testing ☆11 · Updated 7 months ago
- List of research papers from ICSE, FSE, ASE, and ISSTA since 2020. ☆23 · Updated 4 months ago
- RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair (http://arxiv.org/pdf/2312.15698) ☆34 · Updated 3 months ago