mariushobbhahn / SWEBench-verified-mini
☆24 · Updated last year
Alternatives and similar repositories for SWEBench-verified-mini
Users interested in SWEBench-verified-mini are comparing it to the libraries listed below.
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆225 · Updated 6 months ago
- ☆128 · Updated 3 months ago
- A benchmark that challenges language models to code solutions for scientific problems ☆166 · Updated this week
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆164 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆64 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers"☆124Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples☆114Updated 5 months ago
- ☆41Updated last year
- A library for efficient patching and automatic circuit discovery.☆84Updated 2 weeks ago
- Replicating O1 inference-time scaling laws☆90Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023)☆168Updated 5 months ago
- Code for the TMLR 2023 paper "PPOCoder: Execution-based Code Generation using Deep Reinforcement Learning"☆118Updated 2 years ago
- ☆129Updated 7 months ago
- A toolkit for describing model features and intervening on those features to steer behavior.☆226Updated last month
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898☆235Updated last year
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024☆184Updated last year
- Open sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task.☆237Updated this week
- Can Language Models Solve Olympiad Programming?☆124Updated last year
- ☆44Updated 8 months ago
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment☆138Updated 8 months ago
- A Comprehensive Benchmark for Software Development.☆124Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods☆161Updated 6 months ago
- ☆33Updated 4 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision☆124Updated last year
- A distributed, extensible, secure solution for evaluating machine generated code with unit tests in multiple programming languages.☆62Updated last year
- RepoQA: Evaluating Long-Context Code Understanding☆128Updated last year
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use☆178Updated last year
- A simple unified framework for evaluating LLMs☆258Updated 9 months ago
- ☆32Updated this week
- ☆88Updated 2 months ago