google-deepmind / alphaevolve_results
☆253 · Updated 6 months ago
Alternatives and similar repositories for alphaevolve_results
Users interested in alphaevolve_results are comparing it to the libraries listed below.
- Open-source release accompanying Gao et al. 2025 · ☆486 · Updated last month
- ShinkaEvolve: Towards Open-Ended and Sample-Efficient Program Evolution · ☆773 · Updated this week
- ☆482 · Updated 5 months ago
- Evaluation of LLMs on latest math competitions · ☆211 · Updated 2 weeks ago
- ☆600 · Updated 7 months ago
- Training teachers with reinforcement learning to teach LLMs how to reason for test-time scaling · ☆355 · Updated 6 months ago
- ☆178 · Updated 3 weeks ago
- Library for text-to-text regression, applicable to any input string representation; supports pretraining and fine-tuning over multiple r… · ☆305 · Updated 3 weeks ago
- Testing baseline LLM performance across various models · ☆332 · Updated last week
- ☆395 · Updated 3 weeks ago
- ☆113 · Updated 3 months ago
- ☆164 · Updated 4 months ago
- Technical report of Kimina-Prover Preview · ☆350 · Updated 6 months ago
- Open source interpretability artefacts for R1 · ☆165 · Updated 8 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache · ☆137 · Updated 4 months ago
- Accompanying material for the sleep-time compute paper · ☆118 · Updated 8 months ago
- ☆213 · Updated 4 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" · ☆341 · Updated 2 months ago
- Research code artifacts for Code World Model (CWM) including inference tools, reproducibility, and documentation · ☆792 · Updated 2 weeks ago
- Repository for Zochi's Research · ☆297 · Updated last month
- ☆224 · Updated 9 months ago
- RLP: Reinforcement as a Pretraining Objective · ☆222 · Updated 3 months ago
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" · ☆284 · Updated last month
- Code for the paper: "Learning to Reason without External Rewards" · ☆385 · Updated 6 months ago
- Training API and CLI · ☆305 · Updated 3 weeks ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models · ☆228 · Updated 2 months ago
- Tina: Tiny Reasoning Models via LoRA · ☆312 · Updated 3 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) · ☆340 · Updated 3 weeks ago
- ☆277 · Updated 8 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars…
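The memory-layers entry above describes the mechanism: a trainable key-value store in which each token retrieves only a handful of value slots, so parameter count grows with the table size while per-token compute grows only with the top-k width. Below is a minimal PyTorch sketch of that idea; the class name, sizes, and the flat top-k search are illustrative assumptions, not that repository's implementation (real designs such as product-key memories factor the key search so they never score every slot).

```python
# A minimal sketch of a sparse key-value memory layer, assuming a simplified
# flat top-k design; names and sizes are illustrative, not taken from the
# repository above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, num_slots: int = 65536, k: int = 32):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        # Trainable keys and values: the value table holds the bulk of the
        # extra parameters, but only k rows are read per token.
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * d_model ** -0.5)
        self.values = nn.Embedding(num_slots, d_model)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                             # (B, S, D)
        scores = q @ self.keys.T                           # (B, S, num_slots)
        top_scores, top_idx = scores.topk(self.k, dim=-1)  # keep only k slots
        weights = F.softmax(top_scores, dim=-1)            # (B, S, k)
        vals = self.values(top_idx)                        # sparse gather: (B, S, k, D)
        return x + (weights.unsqueeze(-1) * vals).sum(dim=-2)

# Example: a 512-dim layer with 65k slots adds ~33M value parameters,
# yet each token reads and mixes only k=32 of them.
layer = MemoryLayer(d_model=512)
out = layer(torch.randn(2, 16, 512))  # (2, 16, 512)
```

Because only the gathered rows of the value table participate in each forward pass, capacity can be scaled by enlarging `num_slots` while per-token FLOPs stay pinned to the projection, the key scoring, and the k-way mixture.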