SWE-Perf / SWE-Perf
☆43 · Updated 3 weeks ago
Alternatives and similar repositories for SWE-Perf
Users interested in SWE-Perf are comparing it to the repositories listed below.
- ☆12 · Updated 3 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆84 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆162 · Updated 3 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆66 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral at ACL 2024 SRW ☆64 · Updated last year
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆97 · Updated last month
- LeetCode Training and Evaluation Dataset ☆40 · Updated 7 months ago
- ☆33 · Updated 2 months ago
- [NeurIPS'25] Official Implementation of RISE (Reinforcing Reasoning with Self-Verification) ☆30 · Updated 3 months ago
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆160 · Updated last year
- A Comprehensive Benchmark for Software Development. ☆119 · Updated last year
- [NeurIPS 2025 D&B] 🚀 SWE-bench Goes Live! ☆135 · Updated this week
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆188 · Updated 4 months ago
- Must-read papers on Repository-level Code Generation & Issue Resolution 🔥 ☆207 · Updated last week
- Reproducing R1 for Code with Reliable Rewards ☆271 · Updated 6 months ago
- ☆54 · Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆176 · Updated 6 months ago
- Knowledge transfer from high-resource to low-resource programming languages for Code LLMs ☆16 · Updated 3 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆179 · Updated last year
- A comprehensive review of code-domain benchmarks from LLM research. ☆151 · Updated 2 months ago
- NaturalCodeBench (Findings of ACL 2024) ☆67 · Updated last year
- A distributed, extensible, secure solution for evaluating machine generated code with unit tests in multiple programming languages ☆57 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆116 · Updated 6 months ago
- [ACL25] FEA-Bench: A Benchmark for Evaluating Repository-Level Code Generation for Feature Implementation ☆33 · Updated last month
- Reproducing R1 for Code with Reliable Rewards ☆12 · Updated 7 months ago
- Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving ☆282 · Updated last week
- Collection of papers for scalable automated alignment. ☆94 · Updated last year
- BrowseComp-Plus: A More Fair and Transparent Evaluation Benchmark of Deep-Research Agent ☆115 · Updated last month
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆32 · Updated last month
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆133 · Updated last year