google-deepmind / alphaevolve_results
☆213 · Updated last month
Alternatives and similar repositories for alphaevolve_results
Users interested in alphaevolve_results are comparing it to the repositories listed below.
- ☆402 · Updated 2 months ago
- Evaluation of LLMs on the latest math competitions ☆155 · Updated 2 weeks ago
- Training teacher models with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆324 · Updated last month
- ☆455 · Updated 2 weeks ago
- Open source interpretability artefacts for R1. ☆157 · Updated 3 months ago
- ☆262 · Updated 3 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆321 · Updated 8 months ago
- Repository for Zochi's Research ☆248 · Updated 3 weeks ago
- Code for ExploreToM ☆84 · Updated last month
- ☆172 · Updated 3 months ago
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆226 · Updated 2 weeks ago
- Testing baseline LLMs' performance across various models ☆291 · Updated last week
- GPQA: A Graduate-Level Google-Proof Q&A Benchmark ☆378 · Updated 10 months ago
- Accompanying material for the sleep-time compute paper ☆99 · Updated 3 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (a minimal sketch of the idea follows this list). Conceptually, spars… ☆344 · Updated 7 months ago
- Open-source interpretability platform 🧠 ☆311 · Updated this week
- ☆88 · Updated last month
- Decentralized RL Training at Scale ☆400 · Updated this week
- Code for the paper: "Learning to Reason without External Rewards" ☆344 · Updated 3 weeks ago
- An open-source implementation of LFMs from Liquid AI: Liquid Foundation Models ☆103 · Updated 10 months ago
- ☆194 · Updated 4 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents ☆538 · Updated 2 weeks ago
- Code for the paper: "Training Software Engineering Agents and Verifiers with SWE-Gym" [ICML 2025] ☆516 · Updated last week
- ☆212 · Updated 5 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆219 · Updated last month
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆149 · Updated 6 months ago
- Technical report of Kimina-Prover Preview. ☆320 · Updated 3 weeks ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆318 · Updated 9 months ago
- Self-Adapting Language Models ☆743 · Updated this week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆175 · Updated 4 months ago
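
The memory-layers entry above describes a trainable key-value lookup that adds parameters without adding FLOPs. The sketch below illustrates that general idea only, under stated assumptions: it is a naive PyTorch module with illustrative names (`MemoryLayer`, `num_slots`, `k`) that are not taken from the linked repository, and it scores every key for clarity, whereas practical designs (e.g. product-key memories) make that scoring step sublinear in the table size.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryLayer(nn.Module):
    """Toy sparse key-value memory layer (illustrative, not the repo's code)."""

    def __init__(self, d_model: int, num_slots: int = 4096, k: int = 8):
        super().__init__()
        self.query = nn.Linear(d_model, d_model, bias=False)
        # The extra capacity lives in these tables: parameters grow with
        # num_slots, but only k rows of `values` are mixed in per token.
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query(x)                       # (batch, seq, d_model)
        scores = q @ self.keys.t()              # (batch, seq, num_slots)
        top, idx = scores.topk(self.k, dim=-1)  # keep the k best-matching slots
        weights = F.softmax(top, dim=-1)        # (batch, seq, k)
        picked = self.values[idx]               # (batch, seq, k, d_model)
        out = (weights.unsqueeze(-1) * picked).sum(dim=-2)
        return x + out                          # residual, like an FFN sublayer


layer = MemoryLayer(d_model=64)
h = torch.randn(2, 10, 64)
print(layer(h).shape)  # torch.Size([2, 10, 64])
```

Only k of the num_slots value vectors are gathered per token, so capacity scales with the table while the gather-and-mix cost stays fixed; the dense key-scoring step here is exactly the part that product-key schemes decompose to avoid touching every slot.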