CLAIRE-Labo / EvoTune
Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning.
☆130 · Updated 2 months ago
Alternatives and similar repositories for EvoTune
Users interested in EvoTune are comparing it to the repositories listed below.
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆292 · Updated 2 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆135 · Updated 3 months ago
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind ☆59 · Updated 8 months ago
- [ICLR 2026] Official PyTorch implementation of RLP: Reinforcement as a Pretraining Objective ☆231 · Updated 2 weeks ago
- 📄 Small Batch Size Training for Language Models ☆80 · Updated 4 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆84 · Updated 8 months ago
- ☆91 · Updated last year
- Normalized Transformer (nGPT) ☆198 · Updated last year
- Universal Reasoning Model ☆122 · Updated 3 weeks ago
- Implementation of Infini-Transformer in PyTorch ☆112 · Updated last year
- Esoteric Language Models ☆111 · Updated this week
- ☆167 · Updated 5 months ago
- Official repo of the paper LM2 ☆46 · Updated 11 months ago
- Implementation of the new SOTA for model-based RL, from the paper "Improving Transformer World Models for Data-Efficient RL", in PyTorch ☆153 · Updated 9 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆182 · Updated 7 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆82 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- Library for text-to-text regression, applicable to any input string representation; allows pretraining and fine-tuning over multiple r… ☆313 · Updated this week
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆128 · Updated 4 months ago
- Extending the Context of Pretrained LLMs by Dropping Their Positional Embedding ☆200 · Updated 3 weeks ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆46 · Updated 3 months ago
- Supporting code for the blog post on modular manifolds ☆115 · Updated 4 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆84 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆186 · Updated 3 weeks ago
- Implementation of SOAR ☆49 · Updated 4 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- [ICLR 2026] RPG: KL-Regularized Policy Gradient (https://arxiv.org/abs/2505.17508) ☆65 · Updated last week
- ☆59 · Updated 2 months ago