commit-0 / commit0
Commit0: Library Generation from Scratch
☆173 · Updated 7 months ago
Alternatives and similar repositories for commit0
Users interested in commit0 are also comparing it to the libraries listed below.
- RepoQA: Evaluating Long-Context Code Understanding ☆125 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 9 months ago
- [ICML '24] R2E: Turn any GitHub Repository into a Programming Agent Environment ☆136 · Updated 7 months ago
- ☆59 · Updated 10 months ago
- ☆126 · Updated 6 months ago
- ☆136 · Updated 8 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 9 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 10 months ago
- Storing long contexts in tiny caches with self-study ☆218 · Updated this week
- A benchmark that challenges language models to code solutions for scientific problems ☆157 · Updated last week
- ☆125 · Updated 9 months ago
- ☆107 · Updated last week
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆322 · Updated this week
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- Small, simple agent task environments for training and evaluation ☆19 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆99 · Updated 4 months ago
- Learning to Retrieve by Trying — source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" ☆51 · Updated last year
- Long context evaluation for large language models ☆224 · Updated 9 months ago
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆153 · Updated last year
- ☆68 · Updated 6 months ago
- ☆128 · Updated 7 months ago
- rl from zero pretrain, can it be done? yes. ☆282 · Updated 2 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- ☆105 · Updated 11 months ago
- Evaluation of LLMs on latest math competitions ☆200 · Updated last month
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆112 · Updated 2 months ago
- Super basic implementation (gist-like) of RLMs with REPL environments. ☆278 · Updated last month
- Can Language Models Solve Olympiad Programming? ☆122 · Updated 10 months ago
- Evaluating LLMs with CommonGen-Lite ☆93 · Updated last year
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆84 · Updated 8 months ago