commit-0 / commit0
Commit0: Library Generation from Scratch
☆168 · Updated 5 months ago
Alternatives and similar repositories for commit0
Users interested in commit0 are comparing it to the libraries listed below.
- RepoQA: Evaluating Long-Context Code Understanding ☆117 · Updated 11 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆184 · Updated 7 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 7 months ago
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆132 · Updated 5 months ago
- ☆115 · Updated 4 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆151 · Updated this week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 8 months ago
- Long context evaluation for large language models ☆222 · Updated 7 months ago
- ☆135 · Updated 6 months ago
- ☆57 · Updated 8 months ago
- Evaluating LLMs with fewer examples ☆161 · Updated last year
- A benchmark that challenges language models to code solutions for scientific problems ☆143 · Updated last week
- ☆103 · Updated this week
- Storing long contexts in tiny caches with self-study ☆194 · Updated 3 weeks ago
- ☆123 · Updated 7 months ago
- SWE Arena ☆34 · Updated 3 months ago
- Evaluation of LLMs on the latest math competitions ☆171 · Updated 3 weeks ago
- ☆118 · Updated 5 months ago
- ☆101 · Updated 9 months ago
- A simple unified framework for evaluating LLMs ☆250 · Updated 5 months ago
- Replicating O1 inference-time scaling laws ☆90 · Updated 10 months ago
- Small, simple agent task environments for training and evaluation ☆18 · Updated 11 months ago
- ☆109 · Updated 5 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated 5 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆414 · Updated last week
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆77 · Updated 6 months ago
- ☆186 · Updated last year
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆53 · Updated 5 months ago
- ☆80 · Updated this week
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year