scaleapi / SWE-bench_Pro-os
SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks?
☆209 Updated last week
Alternatives and similar repositories for SWE-bench_Pro-os
Users interested in SWE-bench_Pro-os are comparing it to the repositories listed below
- Pivotal Token Search ☆131 Updated 3 months ago
- GRPO training code which scales to 32xH100s for long horizon terminal/coding tasks. Base agent is now the top Qwen3 agent on Stanford's T… ☆291 Updated 2 months ago
- ☆116 Updated 9 months ago
- Verify Precision of all Kimi K2 API Vendor ☆340 Updated last week
- a curated list of data for reasoning ai ☆140 Updated last year
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆352 Updated this week
- ☆59 Updated 9 months ago
- Train your own SOTA deductive reasoning model ☆109 Updated 8 months ago
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆242 Updated 3 weeks ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆442 Updated this week
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆59 Updated 6 months ago
- Alice in Wonderland code base for experiments and raw experiments data ☆131 Updated last month
- Routing on Random Forest (RoRF) ☆218 Updated last year
- ☆121 Updated 5 months ago
- Benchmark that evaluates LLMs using 759 NYT Connections puzzles extended with extra trick words ☆156 Updated 3 weeks ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆188 Updated 8 months ago
- Coding problems used in aider's polyglot benchmark ☆186 Updated 10 months ago
- Public repository containing METR's DVC pipeline for eval data analysis ☆128 Updated 7 months ago
- Run SWE-bench evaluations remotely ☆42 Updated 2 months ago
- LLMProc: Unix-inspired runtime that treats LLMs as processes. ☆34 Updated 3 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆80 Updated 7 months ago
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆182 Updated 5 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆98 Updated 3 months ago
- ☆135 Updated 7 months ago
- ☆453 Updated last week
- ☆125 Updated 6 months ago
- ☆63 Updated 4 months ago
- ☆68 Updated 5 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 Updated 8 months ago
- Simple examples using Argilla tools to build AI ☆56 Updated 11 months ago