ExpertiseModel / MuTAP
MuTAP: A prompt-based learning technique to automatically generate test cases with Large Language Models
☆44 · Updated 5 months ago
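Because the one-line description above is terse, here is a minimal, hypothetical sketch of what prompt-based unit-test generation with an LLM can look like. This is not MuTAP's actual pipeline: the prompt wording, the `call_llm` placeholder, and the re-prompt-on-failure loop are illustrative assumptions only.

```python
import subprocess
import tempfile
from pathlib import Path


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client (assumption: returns raw model text)."""
    raise NotImplementedError("wire up an LLM provider here")


def build_prompt(focal_source: str) -> str:
    """Zero-shot prompt: show the function under test and ask for a pytest module."""
    return (
        "You are a Python testing assistant.\n"
        "Write a pytest test module for the function below. "
        "Return only runnable Python code.\n\n"
        f"{focal_source}\n"
    )


def generate_and_run_tests(focal_source: str, max_attempts: int = 3) -> bool:
    """Generate tests, run them, and re-prompt with the failure output if they fail."""
    prompt = build_prompt(focal_source)
    for _ in range(max_attempts):
        test_code = call_llm(prompt)
        with tempfile.TemporaryDirectory() as tmp:
            test_file = Path(tmp) / "test_generated.py"
            # Keep the focal function and the generated tests in one module
            # so the test file needs no extra imports to run.
            test_file.write_text(focal_source + "\n\n" + test_code)
            result = subprocess.run(
                ["python", "-m", "pytest", str(test_file), "-q"],
                capture_output=True,
                text=True,
            )
        if result.returncode == 0:
            return True  # the generated tests compile and pass
        # Feed the failure output back to the model and ask for a fix.
        prompt = (
            build_prompt(focal_source)
            + "\nThe previous tests failed with:\n"
            + result.stdout[-2000:]
            + "\nPlease fix the tests."
        )
    return False
```

Executing the generated tests and feeding failures back into the prompt is a common refinement pattern in LLM-based test generation; the stopping criterion here (three attempts) is an arbitrary choice for the sketch.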
Alternatives and similar repositories for MuTAP
Users interested in MuTAP are comparing it to the repositories listed below.
- This repo is for our submission to ICSE 2025. ☆20 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ☆198 · Updated 4 months ago
- Replication package of the ICSE 2025 paper "Leveraging Large Language Models for Enhancing the Understandability of Generated Unit …" ☆8 · Updated 5 months ago
- RepairAgent is an autonomous LLM-based agent for software repair. ☆59 · Updated 2 weeks ago
- methods2test is a supervised dataset consisting of Test Cases and their corresponding Focal Methods from a set of Java software repositor… ☆160 · Updated last year
- ☆49 · Updated last year
- ☆432 · Updated last year
- Evaluation code of the ASE 2024 paper "On the Evaluation of LLM in Unit Test Generation" ☆11 · Updated 8 months ago
- Dianshu-Liao / AAA-Code-Generation-Framework-for-Code-Repository-Local-Aware-Global-Aware-Third-Party-Aware ☆19 · Updated last year
- LLM agent to automatically set up arbitrary projects and run their test suites ☆45 · Updated last month
- Replication package of the paper "Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction" ☆25 · Updated last year
- The official Python SDK for Codellm-Devkit ☆108 · Updated 2 weeks ago
- ☆145 · Updated 2 weeks ago
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" ☆88 · Updated 4 months ago
- Repository for the paper "Large Language Model-Based Agents for Software Engineering: A Survey". Continuously updated. ☆487 · Updated 4 months ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆73 · Updated 11 months ago
- Large Language Models for Software Engineering ☆241 · Updated 2 weeks ago
- [TOSEM 2023] A Survey of Learning-based Automated Program Repair ☆70 · Updated last year
- Benchmark ClassEval for class-level code generation ☆145 · Updated 9 months ago
- ✅ SRepair: Powerful LLM-based Program Repairer with $0.029/Fixed Bug ☆67 · Updated last year
- A framework to generate unit tests using LLMs ☆37 · Updated 2 months ago
- Dataflow-guided retrieval augmentation for repository-level code completion, ACL 2024 (main) ☆26 · Updated 4 months ago
- A Systematic Literature Review on Large Language Models for Automated Program Repair ☆198 · Updated 8 months ago
- BugsInPy: Benchmarking Bugs in Python Projects ☆105 · Updated last year
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆148 · Updated 7 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆76 · Updated last year
- ☆28 · Updated 2 years ago
- ☆12 · Updated 8 months ago
- ☆27 · Updated 6 months ago
- ☆142 · Updated 2 months ago