seal-rg / recurrent-pretraining
Pretraining and inference code for a large-scale depth-recurrent language model
☆816 · Updated last month
Alternatives and similar repositories for recurrent-pretraining
Users interested in recurrent-pretraining are comparing it to the libraries listed below.
- Training Large Language Model to Reason in a Continuous Latent Space · ☆1,249 · Updated 2 weeks ago
- Dream 7B, a large diffusion language model · ☆915 · Updated this week
- Procedural reasoning datasets · ☆1,060 · Updated last week
- Recipes to scale inference-time compute of open models · ☆1,112 · Updated 3 months ago
- [COLM 2025] LIMO: Less is More for Reasoning · ☆1,006 · Updated 3 weeks ago
- ☆621 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers (a minimal sketch follows this list) · ☆344 · Updated 8 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" · ☆327 · Updated 9 months ago
- A Self-adaptation Framework🐙 that adapts LLMs for unseen tasks in real-time! · ☆1,136 · Updated 6 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. · ☆438 · Updated this week
- Build your own visual reasoning model · ☆405 · Updated this week
- Understanding R1-Zero-Like Training: A Critical Perspective · ☆1,068 · Updated last month
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" · ☆520 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models · ☆845 · Updated 5 months ago
- ☆1,033 · Updated 8 months ago
- MLGym: A New Framework and Benchmark for Advancing AI Research Agents · ☆546 · Updated 2 weeks ago
- Decentralized RL Training at Scale · ☆472 · Updated this week
- Muon is Scalable for LLM Training · ☆1,281 · Updated 3 weeks ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling · ☆904 · Updated 3 months ago
- Code for Quiet-STaR · ☆738 · Updated last year
- System 2 Reasoning Link Collection · ☆852 · Updated 5 months ago
- Continuous Thought Machines, because thought takes time and reasoning is a process. · ☆1,277 · Updated last month
- ☆955 · Updated 7 months ago
- Tina: Tiny Reasoning Models via LoRA · ☆275 · Updated last week
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters · ☆569 · Updated 6 months ago
- Code for the paper "Learning to Reason without External Rewards" · ☆347 · Updated last month
- A bibliography and survey of the papers surrounding o1 · ☆1,207 · Updated 9 months ago
- Code for the BLT research paper · ☆1,966 · Updated 3 months ago
- Official PyTorch implementation for "Large Language Diffusion Models" · ☆2,763 · Updated this week
- ReasonFlux Series: a family of LLM post-training algorithms focusing on data selection, reinforcement learning, and inference scaling · ☆481 · Updated 3 weeks ago
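The memory-layer entry above describes a concrete mechanism: a trainable key-value lookup that adds parameters without a proportional increase in per-token compute. Below is a minimal sketch of that idea, assuming a simple top-k key-value memory in PyTorch; the class name, dimensions, and hyperparameters are illustrative, not the API of any repository listed here.

```python
# Hypothetical sketch of a sparsely activated key-value memory layer.
# Names and shapes are illustrative; this is not the API of any listed repo.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyValueMemory(nn.Module):
    def __init__(self, dim: int, num_keys: int = 4096, topk: int = 32):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_keys, dim) * 0.02)    # trainable keys
        self.values = nn.Parameter(torch.randn(num_keys, dim) * 0.02)  # trainable values
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Score every key, but read only the top-k values,
        # so parameter count grows with num_keys while the value lookup per token
        # stays proportional to topk.
        scores = x @ self.keys.t()                        # (batch, seq, num_keys)
        top_scores, top_idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(top_scores, dim=-1)           # (batch, seq, topk)
        top_values = self.values[top_idx]                 # (batch, seq, topk, dim)
        return (weights.unsqueeze(-1) * top_values).sum(dim=-2)

# Usage: drop in place of (or alongside) a feed-forward block.
mem = KeyValueMemory(dim=512)
out = mem(torch.randn(2, 16, 512))                        # (2, 16, 512)
```

At scale, such layers are typically paired with a product-key lookup so that even the key scoring stays sublinear in the number of memory slots; the sketch above scores every key for simplicity.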