CLAIRE-Labo / EvoTune
Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning.
☆120 · Updated last month
Alternatives and similar repositories for EvoTune
Users interested in EvoTune are comparing it to the repositories listed below.
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts", by Xu Owen He at DeepMind ☆132 · Updated last month
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆277 · Updated last month
- Implementation of Mind Evolution, "Evolving Deeper LLM Thinking", from DeepMind ☆57 · Updated 6 months ago
- ☆91 · Updated last year
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction ☆82 · Updated 7 months ago
- RLP: Reinforcement as a Pretraining Objective ☆218 · Updated 2 months ago
- Simple repository for training small reasoning models ☆47 · Updated 10 months ago
- Implementation of Infini-Transformer in PyTorch ☆113 · Updated 11 months ago
- 📄 Small Batch Size Training for Language Models ☆69 · Updated 2 months ago
- ☆82 · Updated last year
- AIRA-dojo: a framework for developing and evaluating AI research agents ☆121 · Updated last month
- Official repo of the paper LM2 ☆46 · Updated 10 months ago
- ☆162 · Updated 4 months ago
- Mixture of A Million Experts ☆52 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- Official implementation of Regularized Policy Gradient (RPG) (https://arxiv.org/abs/2505.17508) ☆60 · Updated 2 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- Normalized Transformer (nGPT) ☆194 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆45 · Updated 2 months ago
- Exploration into the Scaling Value Iteration Networks paper, from Schmidhuber's group ☆37 · Updated last year
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆56 · Updated 9 months ago
- Self-contained PyTorch implementation of a Sinkhorn-based router, for mixture of experts or otherwise ☆39 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆85 · Updated 3 months ago
- Esoteric Language Models ☆108 · Updated last month
- Implementation of the new SOTA for model-based RL, from the paper "Improving Transformer World Models for Data-Efficient RL", in PyTorch ☆148 · Updated 7 months ago
- Flash Attention Triton kernel with support for second-order derivatives ☆125 · Updated last week
- Implementation of SOAR ☆46 · Updated 3 months ago
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 3 months ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated 2 years ago