bytedance / Repo2Run
Repo2Run is an LLM-based agent that automates environment configuration by generating error-free Dockerfiles for Python repositories.
☆52 · Updated last month
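By way of illustration, the sketch below shows the general shape of the task Repo2Run automates: write a Dockerfile for a Python repository, build it, and treat a clean build as a correctly configured environment. This is a minimal, hypothetical sketch only, not Repo2Run's actual pipeline, CLI, or output format; the `requirements.txt` layout, base image, image tag, and `pytest` entry point are all assumptions.

```python
# Minimal sketch of automated environment configuration for a Python repo.
# NOT Repo2Run's actual pipeline or API -- purely illustrative; assumes the
# target repo declares its dependencies in a top-level requirements.txt.
import subprocess
from pathlib import Path

DOCKERFILE_TEMPLATE = """\
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["pytest", "-q"]
"""

def configure_and_build(repo_dir: str, tag: str = "repo-env:latest") -> bool:
    """Write a Dockerfile into repo_dir and try to build it.

    Returns True if `docker build` exits cleanly, i.e. the environment
    was configured without errors.
    """
    repo = Path(repo_dir)
    (repo / "Dockerfile").write_text(DOCKERFILE_TEMPLATE)
    result = subprocess.run(
        ["docker", "build", "-t", tag, str(repo)],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # An LLM-based agent such as Repo2Run would presumably feed this
        # build log back into the model and revise the Dockerfile; here we
        # simply report the failure.
        print(result.stderr)
        return False
    return True

if __name__ == "__main__":
    print(configure_and_build("."))
```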
Alternatives and similar repositories for Repo2Run
Users who are interested in Repo2Run are comparing it to the repositories listed below
- Enhancing AI Software Engineering with Repository-level Code Graph ☆215 · Updated 5 months ago
- ☆112 · Updated 3 months ago
- Run SWE-bench evaluations remotely ☆42 · Updated last month
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆407 · Updated this week
- The official repo for the code and data of the paper SMART ☆36 · Updated 7 months ago
- ☆100 · Updated last year
- ☆117 · Updated 4 months ago
- Official implementation of the paper How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%) ☆94 · Updated 6 months ago
- Data Synthesis for Deep Research Based on Semi-Structured Data ☆158 · Updated this week
- Agentless Lite: RAG-based SWE-Bench software engineering scaffold ☆43 · Updated 5 months ago
- [NeurIPS 2025 D&B] 🚀 SWE-bench Goes Live! ☆122 · Updated this week
- [ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline. ☆55 · Updated 2 months ago
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆163 · Updated 2 months ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆165 · Updated last year
- Official code for the paper "CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules" ☆46 · Updated 8 months ago
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | ACL 2024 SRW Oral ☆62 · Updated 11 months ago
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆105 · Updated 3 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆117 · Updated 10 months ago
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆112 · Updated 11 months ago
- An evaluation benchmark for MCP servers ☆211 · Updated 3 weeks ago
- Advancing LLMs with Diverse Coding Capabilities ☆78 · Updated last year
- ☆35 · Updated 4 months ago
- Official Code Repository for the paper "Distilling LLM Agent into Small Models with Retrieval and Code Tools" ☆154 · Updated last month
- ☆274 · Updated 2 months ago
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆56 · Updated last week
- Benchmark and research code for the paper SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks ☆245 · Updated 4 months ago
- MapCoder: Multi-Agent Code Generation for Competitive Problem Solving ☆165 · Updated 7 months ago
- Agent-computer interface for an AI software engineer. ☆111 · Updated last week
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆213 · Updated last week
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆85 · Updated this week