THUDM / SWE-Dev
[ACL'25 Findings] SWE-Dev is an SWE agent with a scalable test case construction pipeline.
☆53 · Updated last month
Alternatives and similar repositories for SWE-Dev
Users interested in SWE-Dev are comparing it to the repositories listed below.
- RepoQA: Evaluating Long-Context Code Understanding ☆115 · Updated 10 months ago
- ☆112 · Updated 3 months ago
- ☆108 · Updated 2 months ago
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆153 · Updated last month
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆104 · Updated last month
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback ☆68 · Updated last year
- ☆52 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 7 months ago
- ☆77 · Updated last week
- ☆103 · Updated 8 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆152 · Updated 10 months ago
- ☆70 · Updated this week
- A Comprehensive Benchmark for Software Development. ☆113 · Updated last year
- Code for the paper: CodeTree: Agent-guided Tree Search for Code Generation with Large Language Models ☆27 · Updated 5 months ago
- 🚀 SWE-bench Goes Live! ☆113 · Updated last month
- Training and Benchmarking LLMs for Code Preference. ☆35 · Updated 9 months ago
- Run SWE-bench evaluations remotely ☆40 · Updated 2 weeks ago
- Moatless Testbeds allows you to create isolated testbed environments in a Kubernetes cluster where you can apply code changes through git… ☆15 · Updated 4 months ago
- This is the official implementation for the paper "PENCIL: Long Thoughts with Short Memory". ☆61 · Updated 3 months ago
- Code for the paper "Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System" ☆61 · Updated 9 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆92 · Updated 3 months ago
- ☆41 · Updated last year
- [ACL 2025] Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆104 · Updated 2 months ago
- ☆46 · Updated 2 months ago
- ☆39 · Updated 4 months ago
- ReasonFlux-Coder: Open-Source LLM Coders with Co-Evolving Reinforcement Learning ☆109 · Updated this week
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆62 · Updated 10 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆179 · Updated 5 months ago
- ☆28 · Updated 2 weeks ago
- CodeElo: Benchmarking Competition-level Code Generation of LLMs with Human-comparable Elo Ratings ☆52 · Updated 6 months ago