SWE-Perf / SWE-Perf
☆44 · Updated last month
Alternatives and similar repositories for SWE-Perf
Users interested in SWE-Perf are comparing it to the libraries listed below.
- InstructCoder: Instruction Tuning Large Language Models for Code Editing (Oral, ACL 2024 SRW) ☆64 · Updated last year
- SWE-Swiss: A Multi-Task Fine-Tuning and RL Recipe for High-Performance Issue Resolution ☆101 · Updated 2 months ago
- The repository for the paper "DebugBench: Evaluating Debugging Capability of Large Language Models" ☆84 · Updated last year
- ☆12 · Updated 4 months ago
- ☆33 · Updated 3 months ago
- LeetCode Training and Evaluation Dataset ☆43 · Updated 7 months ago
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆164 · Updated 3 months ago
- A Comprehensive Benchmark for Software Development ☆122 · Updated last year
- Reproducing R1 for Code with Reliable Rewards ☆277 · Updated 7 months ago
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆208 · Updated 5 months ago
- An Evolving Code Generation Benchmark Aligned with Real-world Code Repositories ☆66 · Updated last year
- [NeurIPS'25] Official Implementation of RISE (Reinforcing Reasoning with Self-Verification) ☆30 · Updated 4 months ago
- Baselines for all tasks from Long Code Arena benchmarks 🏟️ ☆38 · Updated 8 months ago
- Must-read papers on Repository-level Code Generation & Issue Resolution 🔥 ☆219 · Updated this week
- A comprehensive review of code-domain benchmarks for LLM research ☆169 · Updated 2 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆161 · Updated last year
- [NeurIPS 2025 D&B] 🚀 SWE-bench Goes Live! ☆142 · Updated this week
- ☆54 · Updated last year
- CodeRAG-Bench: Can Retrieval Augment Code Generation? ☆162 · Updated last year
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆180 · Updated 6 months ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (ICLR 2024) ☆182 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆118 · Updated 7 months ago
- [LREC-COLING'24] HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization ☆40 · Updated 9 months ago
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages ☆61 · Updated last year
- [EMNLP 2024] CodeJudge: Evaluating Code Generation with Large Language Models ☆52 · Updated last month
- Training and Benchmarking LLMs for Code Preference ☆37 · Updated last year
- Official repository for our paper "FullStack Bench: Evaluating LLMs as Full Stack Coders" ☆107 · Updated 7 months ago
- Collection of papers for scalable automated alignment ☆94 · Updated last year
- NaturalCodeBench (Findings of ACL 2024) ☆68 · Updated last year