ise-uiuc / Repilot
Repilot is a patch generation tool introduced in the ESEC/FSE'23 paper "Copiloting the Copilots: Fusing Large Language Models with Completion Engines for Automated Program Repair".
☆127 · Updated last year
Alternatives and similar repositories for Repilot:
Users interested in Repilot are comparing it to the libraries listed below.
- EvoEval: Evolving Coding Benchmarks via LLM ☆66 · Updated 10 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test-generation ☆34 · Updated this week
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆41 · Updated 6 months ago
- r2e: turn any GitHub repository into a programming agent environment ☆100 · Updated 2 weeks ago
- RepoQA: Evaluating Long-Context Code Understanding ☆102 · Updated 3 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆132 · Updated last month
- Static Analysis meets Large Language Models ☆48 · Updated 9 months ago
- ✅ SRepair: Powerful LLM-based Program Repairer with $0.029/Fixed Bug ☆58 · Updated 9 months ago
- RepairAgent is an autonomous LLM-based agent for software repair. ☆29 · Updated this week
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆144 · Updated 6 months ago
- 🌟 Replication package for 📜 "From Commit Message Generation to History-Aware Commit Message Completion" (ASE 2023) ☆58 · Updated last year
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆62 · Updated 7 months ago
- Open-source Self-Instruction Tuning Code LLM ☆170 · Updated last year
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆92 · Updated 3 months ago
- Large Language Models for Software Engineering ☆204 · Updated this week
- Extract and combine multiple source code views using tree-sitter ☆122 · Updated 2 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆130 · Updated 6 months ago
- A collection of practical code generation tasks and tests in open source projects. Complementary to HumanEval by OpenAI. ☆131 · Updated last month
- For our ICSE'23 paper "Impact of Code Language Models on Automated Program Repair" by Nan Jiang, Kevin Liu, Thibaud Lutellier, and Lin Tan ☆59 · Updated 4 months ago
- Data and evaluation scripts for "CodePlan: Repository-level Coding using LLMs and Planning", FSE 2024 ☆62 · Updated 5 months ago
- Code and data artifact for the NeurIPS 2023 paper "Monitor-Guided Decoding of Code LMs with Static Analysis of Repository Context". `multis…` ☆238 · Updated 6 months ago
- [TOSEM 2023] A Survey of Learning-based Automated Program Repair ☆68 · Updated 9 months ago
- [NeurIPS'24] SelfCodeAlign: Self-Alignment for Code Generation ☆295 · Updated 3 months ago
- 🐙 OctoPack: Instruction Tuning Code Large Language Models ☆451 · Updated 2 weeks ago
- Count Tokens of Code (forked from gocloc) ☆43 · Updated 6 months ago
- [FORGE 2025] Graph-based method for end-to-end code completion with repository-level context awareness ☆57 · Updated 5 months ago
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) ☆208 · Updated 9 months ago
- Fuzzing Automatic Differentiation in Deep-Learning Libraries (ICSE'23) ☆22 · Updated 11 months ago