ExpertiseModel / MuTAP
MuTAP: A prompt-based learning technique to automatically generate test cases with Large Language Models
☆43 · Updated 4 months ago
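The core loop behind prompt-based test generation of this kind is: show the LLM the focal function, ask for unit tests, and, when mutation testing reports a surviving mutant, feed that mutant back into the prompt so the next round targets the undetected fault. The sketch below is a minimal illustration of that prompt-augmentation idea, not MuTAP's actual prompts; `build_test_prompt`, the `clamp` focal function, and the mutant text are all hypothetical.

```python
def build_test_prompt(focal_source, surviving_mutant=None):
    """Assemble a prompt asking an LLM to write unit tests for a focal function.

    If a previous round of generated tests left a mutant alive, its source is
    appended so the model can target the undetected fault (the prompt-
    augmentation step).
    """
    prompt = (
        "Write pytest unit tests for the following Python function:\n\n"
        f"{focal_source}\n"
    )
    if surviving_mutant is not None:
        prompt += (
            "\nThe previous tests did not kill this mutant; add a test whose "
            "outcome differs between the original and the mutated version:\n\n"
            f"{surviving_mutant}\n"
        )
    return prompt


# Hypothetical focal function and a mutant with `min` flipped to `max`.
focal = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"
mutant = "def clamp(x, lo, hi):\n    return max(lo, max(x, hi))"
print(build_test_prompt(focal, surviving_mutant=mutant))
```

In practice the returned string would be sent to an LLM API and the generated tests executed against both the original function and each mutant to decide which mutants survive.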
Alternatives and similar repositories for MuTAP
Users interested in MuTAP are comparing it to the libraries listed below.
- This repo is for our submission for ICSE 2025. ☆20 · Updated last year
- Replication package of the ICSE 2025 paper titled "Leveraging Large Language Models for Enhancing the Understandability of Generated Unit …" ☆8 · Updated 4 months ago
- LLM agent to automatically set up arbitrary projects and run their test suites ☆44 · Updated last week
- Dianshu-Liao / AAA-Code-Generation-Framework-for-Code-Repository-Local-Aware-Global-Aware-Third-Party-Aware ☆19 · Updated last year
- ☆423 · Updated last year
- Replication package of the paper "Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction" ☆25 · Updated last year
- [TOSEM 2023] A Survey of Learning-based Automated Program Repair ☆71 · Updated last year
- ☆48 · Updated last year
- Benchmark ClassEval for class-level code generation. ☆144 · Updated 8 months ago
- methods2test is a supervised dataset consisting of Test Cases and their corresponding Focal Methods from a set of Java software repositor… ☆159 · Updated last year
- Large Language Models for Software Engineering ☆236 · Updated this week
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆61 · Updated 11 months ago
- BugsInPy: Benchmarking Bugs in Python Projects ☆105 · Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ☆191 · Updated 3 months ago
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" ☆86 · Updated 3 months ago
- ☆28 · Updated 2 years ago
- ☆141 · Updated last month
- RepairAgent is an autonomous LLM-based agent for software repair. ☆50 · Updated 3 weeks ago
- Repository for the paper "Large Language Model-Based Agents for Software Engineering: A Survey". Kept up to date. ☆478 · Updated 4 months ago
- The official Python SDK for Codellm-Devkit ☆106 · Updated 2 weeks ago
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆146 · Updated 6 months ago
- Dataflow-guided retrieval augmentation for repository-level code completion, ACL 2024 (main) ☆26 · Updated 3 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆74 · Updated last year
- A Systematic Literature Review on Large Language Models for Automated Program Repair ☆194 · Updated 7 months ago
- Reinforcement Learning for Repository-Level Code Completion ☆35 · Updated 10 months ago
- A framework to generate unit tests using LLMs ☆37 · Updated 2 months ago
- [TDSC 2023] Pre-trained Model-based Automated Software Vulnerability Repair: How Far are We? ☆26 · Updated 2 years ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆73 · Updated 10 months ago
- CoCoMIC: Code Completion by Jointly Modeling In-file and Cross-file Context ☆17 · Updated 8 months ago
- Evaluation code of the ASE 2024 paper "On the Evaluation of LLM in Unit Test Generation" ☆11 · Updated 7 months ago