theoxo / self-repair
[ICLR 2024] Is Self-Repair a Silver Bullet for Code Generation?
☆15 · Updated last year
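The paper studies the self-repair loop: a model drafts a program, the program is executed against tests, and the failure output is fed back to the model to produce a fix. A minimal sketch of that loop, with `generate_code` as a hypothetical stand-in for the model call (an illustration, not the paper's implementation):

```python
import subprocess
import tempfile

def generate_code(prompt: str) -> str:
    """Hypothetical LLM call; plug in any code-generation model here."""
    raise NotImplementedError

def run_tests(code: str) -> tuple[bool, str]:
    """Execute the candidate program and return (passed, error output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=10
    )
    return proc.returncode == 0, proc.stderr

def self_repair(task: str, max_rounds: int = 3) -> str:
    """Generate once, then repair up to max_rounds times on test failure."""
    code = generate_code(task)
    for _ in range(max_rounds):
        passed, error = run_tests(code)
        if passed:
            break
        # Condition the next attempt on the previous code and its error.
        code = generate_code(
            f"{task}\n\nPrevious attempt:\n{code}\n\nError:\n{error}\n\nFix the code."
        )
    return code
```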
Alternatives and similar repositories for self-repair
Users interested in self-repair are comparing it to the repositories listed below
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆84 · Updated last year
- Pip-compatible CodeBLEU metric implementation available for Linux/macOS/Windows (usage sketch after this list) ☆122 · Updated 7 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆66 · Updated last year
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆38 · Updated 8 months ago
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning" ☆117 · Updated last year
- Simultaneous evaluation of both the functionality and the security of LLM-generated code ☆27 · Updated 2 months ago
- ClassEval: a benchmark for class-level code generation ☆145 · Updated last year
- ☆11 · Updated last year
- ☆15 · Updated 11 months ago
- A Manually-Annotated Code Generation Benchmark Aligned with Real-World Code Repositories ☆36 · Updated last year
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆55 · Updated last week
- A comprehensive code-domain benchmark review of LLM research ☆151 · Updated 2 months ago
- A collection of practical code generation tasks and tests in open-source projects, complementary to HumanEval by OpenAI (pass@k sketch after this list) ☆155 · Updated 10 months ago
- Code for the ACL (main) paper "JumpCoder: Go Beyond Autoregressive Coder via Online Modification" ☆27 · Updated last year
- Code and results of the paper "On the Resilience of Multi-Agent Systems with Malicious Agents" ☆40 · Updated 9 months ago
- 🔮 Reasoning for Safer Code Generation; 🥇 winner solution of the Amazon Nova AI Challenge 2025 ☆31 · Updated 2 months ago
- EvoEval: Evolving Coding Benchmarks via LLM ☆80 · Updated last year
- Code for the AAAI 2023 paper "CodeAttack: Code-based Adversarial Attacks for Pre-Trained Programming Language Models" ☆33 · Updated 2 years ago
- ☆48 · Updated last year
- ☆36 · Updated 2 years ago
- TDD-Bench-Verified: a new benchmark for generating test cases for test-driven development (TDD) ☆25 · Updated 2 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆157 · Updated last year
- Repo-level code generation papers ☆223 · Updated 4 months ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆172 · Updated last year
- ☆16 · Updated last year
- ☆46 · Updated 3 years ago
- Large Language Models for Software Engineering ☆255 · Updated 4 months ago
- Repoformer: Selective Retrieval for Repository-Level Code Completion (ICML 2024) ☆61 · Updated 5 months ago
- A certifier for bias in LLMs ☆24 · Updated 7 months ago
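A usage sketch for the CodeBLEU entry above, assuming the `codebleu` package on PyPI (`pip install codebleu`) and its `calc_codebleu` entry point; check that repository's README for the exact signature and supported languages:

```python
# Assumption: the PyPI `codebleu` package exposes `calc_codebleu` as
# documented in its README; argument names follow that documentation.
from codebleu import calc_codebleu

reference = "def add(x, y):\n    return x + y"
prediction = "def add(a, b):\n    return a + b"

result = calc_codebleu(
    [reference],                       # one reference per prediction
    [prediction],
    lang="python",
    weights=(0.25, 0.25, 0.25, 0.25),  # n-gram, weighted n-gram, AST, dataflow
)
print(result["codebleu"])              # aggregate score in [0, 1]
```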
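For the HumanEval-style benchmarks in the list, results are typically reported as pass@k. The unbiased estimator from the original HumanEval paper (Chen et al., 2021) draws n samples per task, of which c pass, and computes pass@k = 1 - C(n-c, k) / C(n, k):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples per task, c of which pass.
    Computes 1 - C(n-c, k) / C(n, k) as a numerically stable product."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 200 samples, 42 passing, budget of 10.
print(pass_at_k(200, 42, 10))
```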