CLAIRE-Labo / EvoTune
Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning.
☆116 · Updated this week
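EvoTune's one-line description names the basic recipe: an LLM proposes candidate programs inside an evolutionary search loop, with reinforcement learning improving the proposer over time. For orientation only, below is a minimal, self-contained sketch of such a loop. All names (`llm_propose`, `evaluate`, `evolve`) are hypothetical placeholders, not EvoTune's actual API, and the toy fitness function stands in for a real program evaluator.

```python
# Minimal sketch of an LLM-driven evolutionary search loop (hypothetical,
# not EvoTune's implementation). `llm_propose` stands in for a call to an
# LLM API; here it just perturbs a program string so the sketch runs.
import random

def llm_propose(parent: str) -> str:
    """Hypothetical LLM mutation: returns a modified program string."""
    # A real system would prompt an LLM with `parent` and ask for an
    # improved variant; here we randomly rewrite one digit.
    return parent.replace(str(random.randint(0, 9)), str(random.randint(0, 9)))

def evaluate(program: str) -> float:
    """Toy fitness stand-in: reward programs containing more '7's."""
    return program.count("7")

def evolve(seed: str, generations: int = 20, population: int = 8) -> str:
    pool = [seed]
    for _ in range(generations):
        # Mutate randomly chosen parents, then keep the fittest candidates.
        children = [llm_propose(random.choice(pool)) for _ in range(population)]
        pool = sorted(pool + children, key=evaluate, reverse=True)[:population]
    return pool[0]

if __name__ == "__main__":
    best = evolve("return 0123456789")
    print(best, evaluate(best))
```

In the RL variant the description alludes to, the evaluator's scores would additionally serve as rewards for fine-tuning the proposer model, so later generations mutate more intelligently.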
Alternatives and similar repositories for EvoTune
Users interested in EvoTune are comparing it to the libraries listed below.
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆129 · Updated last year
- Implementation of Mind Evolution ("Evolving Deeper LLM Thinking") from DeepMind ☆57 · Updated 4 months ago
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆229 · Updated this week
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆72 · Updated 5 months ago
- Flash Attention Triton kernel with support for second-order derivatives ☆106 · Updated last week
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆102 · Updated 10 months ago
- Open-source AlphaEvolve ☆66 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆168 · Updated 4 months ago
- RLP: Reinforcement as a Pretraining Objective ☆192 · Updated 3 weeks ago
- Esoteric Language Models ☆103 · Updated 3 weeks ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆44 · Updated last week
- ☆68 · Updated last year
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆86 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆99 · Updated last year
- Library for text-to-text regression, applicable to any input string representation, allowing pretraining and fine-tuning over multiple r… ☆277 · Updated last week
- Simple repository for training small reasoning models ☆44 · Updated 8 months ago
- Implementation of Infini-Transformer in PyTorch ☆113 · Updated 9 months ago
- Implementation of the new SOTA for model-based RL, from the paper "Improving Transformer World Models for Data-Efficient RL", in PyTorch ☆141 · Updated 5 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆72 · Updated last week
- Mixture of A Million Experts ☆48 · Updated last year
- Normalized Transformer (nGPT) ☆192 · Updated 11 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆56 · Updated 7 months ago
- DeMo: Decoupled Momentum Optimization ☆194 · Updated 10 months ago
- ☆149 · Updated 2 months ago
- Official implementation of the paper "ZClip: Adaptive Spike Mitigation for LLM Pre-Training" ☆136 · Updated 2 weeks ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆103 · Updated 3 weeks ago
- 📄 Small Batch Size Training for Language Models ☆63 · Updated 3 weeks ago
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster ☆71 · Updated 5 months ago
- Implementation of a transformer for reinforcement learning using `x-transformers` ☆69 · Updated last month