SakanaAI / ShinkaEvolve
ShinkaEvolve: Towards Open-Ended and Sample-Efficient Program Evolution
☆261 · Updated this week
Alternatives and similar repositories for ShinkaEvolve
Users who are interested in ShinkaEvolve are comparing it to the repositories listed below.
- ☆97 · Updated 2 months ago
- Library for text-to-text regression, applicable to any input string representation and allows pretraining and fine-tuning over multiple r… ☆225 · Updated last week
- SoTA Approach for ARC-AGI 2 ☆86 · Updated last week
- The official repository of ALE-Bench ☆114 · Updated this week
- Implementation of SOAR ☆43 · Updated last week
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆321 · Updated 11 months ago
- Training teachers with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆343 · Updated 3 months ago
- The code repository of the paper: Competition and Attraction Improve Model Fusion ☆155 · Updated last month
- ☆100 · Updated last month
- A Tree Search Library with Flexible API for LLM Inference-Time Scaling ☆472 · Updated last month
- ☆123 · Updated 9 months ago
- Code for Discovering Preference Optimization Algorithms with and for Large Language Models ☆190 · Updated last year
- ☆80 · Updated last month
- Source code for the collaborative reasoner research project at Meta FAIR. ☆103 · Updated 5 months ago
- ☆60 · Updated 2 months ago
- Storing long contexts in tiny caches with self-study ☆190 · Updated 2 weeks ago
- ☆187 · Updated last month
- Code for… ☆27 · Updated 9 months ago
- Accompanying material for the sleep-time compute paper ☆111 · Updated 4 months ago
- Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning ☆230 · Updated 7 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 8 months ago
- Train your own SOTA deductive reasoning model ☆106 · Updated 6 months ago
- Plotting (entropy, varentropy) for small LMs ☆99 · Updated 4 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆99 · Updated last month
- Open source interpretability artefacts for R1. ☆159 · Updated 5 months ago
- ☆142 · Updated 2 weeks ago
- smolLM with Entropix sampler on PyTorch ☆150 · Updated 10 months ago
- RL from zero pretrain: can it be done? Yes. ☆269 · Updated this week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆183 · Updated 6 months ago
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆24 · Updated last week