scaleapi / SWE-bench_Pro-os
SWE-Bench Pro: Can AI Agents Solve Long-Horizon Software Engineering Tasks?
☆107 · Updated this week
Alternatives and similar repositories for SWE-bench_Pro-os
Users interested in SWE-bench_Pro-os are comparing it to the libraries listed below.
- Pivotal Token Search ☆125 · Updated 2 months ago
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆55 · Updated 4 months ago
- A curated list of data for reasoning AI ☆137 · Updated last year
- ☆230 · Updated 6 months ago
- GRPO training code that scales to 32×H100s for long-horizon terminal/coding tasks. The base agent is now the top Qwen3 agent on Stanford's T… ☆259 · Updated last month
- A framework for optimizing DSPy programs with RL ☆182 · Updated this week
- ☆116 · Updated 7 months ago
- Benchmark that evaluates LLMs using 759 NYT Connections puzzles extended with extra trick words ☆146 · Updated last week
- Routing on Random Forest (RoRF) ☆206 · Updated last year
- ☆104 · Updated 3 months ago
- ☆56 · Updated 7 months ago
- Applying the ideas of DeepSeek R1 to computer use ☆216 · Updated 7 months ago
- LLMProc: a Unix-inspired runtime that treats LLMs as processes ☆33 · Updated 2 months ago
- Sandboxed code execution for AI agents, locally or in the cloud. Massively parallel, easy to extend. Powers SWE-agent and more. ☆321 · Updated this week
- Coding problems used in aider's polyglot benchmark ☆180 · Updated 9 months ago
- Alice in Wonderland code base for experiments and raw experiment data ☆131 · Updated last week
- Train your own SOTA deductive reasoning model ☆106 · Updated 6 months ago
- Accompanying code and SEP dataset for the paper "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" ☆55 · Updated 6 months ago
- ☆421 · Updated last month
- Live-bending a foundation model's output at the neural-network level ☆264 · Updated 5 months ago
- A DSPy-based implementation of the tree-of-thoughts method (Yao et al., 2023) for generating persuasive arguments ☆89 · Updated 11 months ago
- Clue-inspired puzzles for testing LLM deduction abilities ☆43 · Updated 6 months ago
- ☆68 · Updated 4 months ago
- ☆231 · Updated 2 months ago
- j1-micro (1.7B) and j1-nano (600M) are absurdly tiny but mighty reward models ☆98 · Updated 2 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- Public repository containing METR's DVC pipeline for eval data analysis ☆110 · Updated 5 months ago
- ☆32 · Updated last month
- A better way of testing, inspecting, and analyzing AI agent traces ☆40 · Updated this week
- Simple & Scalable Pretraining for Neural Architecture Research ☆294 · Updated last month