NEUIR / INTERVENOR
Source code for the paper "INTERVENOR: Prompting the Coding Ability of Large Language Models with the Interactive Chain of Repair"
☆ 24 · Updated 4 months ago
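For orientation before the project list: the paper's core idea is a generate-execute-repair loop, in which an LLM writes code, the code is run against tests, and the resulting error report is fed back to the model as repair instructions. Below is a minimal, self-contained sketch of such a loop; the `generate()` function is a hypothetical stand-in for a real model call and is not this repository's actual API.

```python
from typing import Optional
import traceback

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; swap in a real model client."""
    return "def add(a, b):\n    return a + b"

def run_with_tests(code: str, tests: str) -> Optional[str]:
    """Execute candidate code with its tests; return an error report, or None on success."""
    env: dict = {}
    try:
        exec(code + "\n" + tests, env)  # candidate and tests share one namespace
        return None
    except Exception:
        return traceback.format_exc()

def repair_loop(task: str, tests: str, max_rounds: int = 3) -> str:
    code = generate(task)
    for _ in range(max_rounds):
        error = run_with_tests(code, tests)
        if error is None:
            return code  # all tests pass
        # Chain of repair: show the model its own code plus the error report.
        code = generate(
            f"{task}\n\nBuggy code:\n{code}\n\nExecution error:\n{error}\n\nRepair the code."
        )
    return code  # best effort after max_rounds

if __name__ == "__main__":
    print(repair_loop("Write a function add(a, b) returning a + b.",
                      "assert add(1, 2) == 3"))
```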
Related projects
Alternatives and complementary repositories for INTERVENOR
- RepoQA: Evaluating Long-Context Code Understanding ☆ 99 · Updated last week
- Enhancing AI Software Engineering with Repository-level Code Graph ☆ 90 · Updated 2 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆ 43 · Updated 10 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆ 77 · Updated 4 months ago
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆ 34 · Updated 11 months ago
- Large Language Models Meet NL2Code: A Survey ☆ 34 · Updated 3 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆ 52 · Updated last month
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆ 133 · Updated 2 months ago
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆ 48 · Updated 8 months ago
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024) ☆ 37 · Updated 4 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆ 120 · Updated 3 months ago
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆ 57 · Updated 7 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆ 73 · Updated 9 months ago
- Open Implementations of LLM Analyses ☆ 94 · Updated last month
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆ 111 · Updated 3 weeks ago
- Pre-training code for the CrystalCoder 7B LLM ☆ 53 · Updated 6 months ago
- Code for the EMNLP 2023 Findings paper "Self-Polish: Enhance Reasoning in Large Language Models via Problem Refining" by Zhiheng Xi, Sen… ☆ 26 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆ 56 · Updated 3 weeks ago
- APIBench is a benchmark for evaluating the performance of API recommendation approaches released in the paper "Revisiting, Benchmarking a… ☆ 52 · Updated last year
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆ 40 · Updated 3 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆ 124 · Updated 2 weeks ago
- ToolBench, an evaluation suite for LLM tool manipulation capabilities ☆ 143 · Updated 8 months ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆ 87 · Updated last year
- Evol-augment any dataset online ☆ 55 · Updated last year