commit-0/commit0
Commit0: Library Generation from Scratch
☆177 · Updated 8 months ago
Alternatives and similar repositories for commit0
Users interested in commit0 are comparing it to the libraries listed below:
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 11 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆128 · Updated last year
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆139 · Updated 9 months ago
- ☆132 · Updated 8 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- ☆137 · Updated 10 months ago
- ☆59 · Updated last year
- Train your own SOTA deductive reasoning model ☆107 · Updated 11 months ago
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- ☆131 · Updated 8 months ago
- A benchmark that challenges language models to code solutions for scientific problems ☆169 · Updated last week
- ☆105 · Updated last year
- A simple unified framework for evaluating LLMs ☆261 · Updated 9 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆128 · Updated 3 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates ☆361 · Updated this week
- Long-context evaluation for large language models ☆226 · Updated 11 months ago
- Code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆152 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Utilities for efficient fine-tuning, inference, and evaluation of code generation models ☆21 · Updated 2 years ago
- LILO: Library Induction with Language Observations ☆90 · Updated last year
- Storing long contexts in tiny caches with self-study ☆233 · Updated 2 months ago
- [NeurIPS '24] SelfCodeAlign: Self-Alignment for Code Generation ☆323 · Updated 11 months ago
- SWE Arena ☆35 · Updated 7 months ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference and evaluation runs on the SWE-bench task ☆246 · Updated last week
- Can Language Models Solve Olympiad Programming? ☆123 · Updated last year
- ☆74 · Updated last year
- [ICML 2025] Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" ☆625 · Updated 6 months ago
- ☆123 · Updated 11 months ago
- CRUXEval: Code Reasoning, Understanding, and Execution Evaluation ☆164 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) ☆238 · Updated last year