mariushobbhahn / SWEBench-verified-mini
☆22 · Updated 10 months ago
Alternatives and similar repositories for SWEBench-verified-mini
Users interested in SWEBench-verified-mini are comparing it to the repositories listed below.
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆223 · Updated 3 weeks ago
- ✨ RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems - ICLR 2024 ☆178 · Updated last year
- A benchmark that challenges language models to code solutions for scientific problems ☆153 · Updated last week
- [COLM 2025] Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆185 · Updated 4 months ago
- [NeurIPS 2023 D&B] Code repository for InterCode benchmark https://arxiv.org/abs/2306.14898 ☆230 · Updated last year
- InstructCoder: Instruction Tuning Large Language Models for Code Editing | Oral, ACL 2024 SRW ☆64 · Updated last year
- CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion (NeurIPS 2023) ☆160 · Updated 3 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆123 · Updated last year
- A toolkit for describing model features and intervening on those features to steer behavior. ☆214 · Updated last year
- ☆102 · Updated last year
- [NeurIPS '25] Challenging Software Optimization Tasks for Evaluating SWE-Agents ☆55 · Updated 2 weeks ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆157 · Updated last year
- A distributed, extensible, secure solution for evaluating machine-generated code with unit tests in multiple programming languages. ☆57 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆121 · Updated last year
- Training and Benchmarking LLMs for Code Preference. ☆37 · Updated last year
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆134 · Updated 6 months ago
- Can Language Models Solve Olympiad Programming? ☆120 · Updated 10 months ago
- A library for efficient patching and automatic circuit discovery. ☆79 · Updated 3 months ago
- ☆117 · Updated last month
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆225 · Updated last week
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- A simple unified framework for evaluating LLMs ☆254 · Updated 7 months ago
- Evaluation of LLMs on latest math competitions ☆178 · Updated 3 weeks ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆94 · Updated 2 years ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆244 · Updated last year
- A Comprehensive Benchmark for Software Development. ☆118 · Updated last year
- [NeurIPS 2024] Evaluation harness for SWT-Bench, a benchmark for evaluating LLM repository-level test generation ☆61 · Updated 2 months ago
- [NeurIPS 2025 D&B] 🚀 SWE-bench Goes Live! ☆132 · Updated last week
- EvoEval: Evolving Coding Benchmarks via LLM ☆80 · Updated last year
- Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions ☆48 · Updated 2 months ago