SakanaAI / self-adaptive-llms
A Self-adaptation Framework that adapts LLMs for unseen tasks in real time!
★1,151 · Updated 8 months ago
Alternatives and similar repositories for self-adaptive-llms
Users who are interested in self-adaptive-llms are comparing it to the repositories listed below.
- Code for the BLT research paper ★1,987 · Updated 4 months ago
- Continuous Thought Machines, because thought takes time and reasoning is a process. ★1,309 · Updated 2 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ★829 · Updated last month
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ★912 · Updated 5 months ago
- Training Large Language Model to Reason in a Continuous Latent Space ★1,272 · Updated last month
- Large Concept Models: Language modeling in a sentence representation space ★2,290 · Updated 8 months ago
- Self-Adapting Language Models ★800 · Updated 2 months ago
- Unofficial implementation of Titans, SOTA memory for transformers, in PyTorch ★1,465 · Updated 4 months ago
- Recipes to scale inference-time compute of open models ★1,109 · Updated 4 months ago
- Dream 7B, a large diffusion language model ★991 · Updated last week
- Hypernetworks that adapt LLMs for specific benchmark tasks using only a textual task description as input ★873 · Updated 3 months ago
- Official PyTorch implementation for "Large Language Diffusion Models" ★2,971 · Updated 2 weeks ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ★322 · Updated 11 months ago
- ★1,273 · Updated 3 weeks ago
- [ICLR 2025] Automated Design of Agentic Systems ★1,428 · Updated 8 months ago
- OLMoE: Open Mixture-of-Experts Language Models ★875 · Updated 2 weeks ago
- [NeurIPS 2025] Atom of Thoughts for Markov LLM Test-Time Scaling ★588 · Updated 3 months ago
- prime is a framework for efficient, globally distributed training of AI models over the internet. ★827 · Updated 4 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (see the sketch after this list) ★343 · Updated 9 months ago
- Muon is Scalable for LLM Training ★1,318 · Updated 2 months ago
- System 2 Reasoning Link Collection ★853 · Updated 6 months ago
- An Open Large Reasoning Model for Real-World Solutions ★1,522 · Updated 4 months ago
- An Open Source Toolkit For LLM Distillation ★732 · Updated 2 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ★1,161 · Updated last week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ★985 · Updated last week
- ★1,034 · Updated 9 months ago
- Synthetic data curation for post-training and structured data extraction ★1,511 · Updated 2 months ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ★701 · Updated last week
- SONAR, a new multilingual and multimodal fixed-size sentence embedding space, with a full suite of speech and text encoders and decoders. ★825 · Updated 2 months ago
- OpenAI Frontier Evals ★903 · Updated last week
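
The memory-layers entry above describes a trainable key-value lookup that adds parameters to a model without a matching increase in per-token compute. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the code from the listed repository; the class name `MemoryLayer` and parameters `num_keys` and `topk` are made up for the example. Each hidden state scores a table of trainable keys, keeps only the top-k matches, and mixes the corresponding trainable values back in through a residual connection. Note that the naive scoring here still scans every key; scalable implementations factorize the key table (product-key memory) so that scoring cost does not grow with the number of stored values.

```python
# Hedged sketch of a sparse key-value memory layer (illustrative only,
# not the official implementation from the listed repository).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryLayer(nn.Module):
    def __init__(self, dim: int, num_keys: int = 4096, topk: int = 8):
        super().__init__()
        # Trainable keys and values: parameter count grows with num_keys,
        # but only `topk` value rows are mixed in per token.
        self.keys = nn.Parameter(torch.randn(num_keys, dim) / dim**0.5)
        self.values = nn.Parameter(torch.randn(num_keys, dim) / dim**0.5)
        self.query_proj = nn.Linear(dim, dim, bias=False)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        q = self.query_proj(x)                      # project hidden states to queries
        scores = q @ self.keys.t()                  # naive: scores against every key
        top_scores, top_idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(top_scores, dim=-1)     # sparse mixing weights over top-k keys
        selected = self.values[top_idx]             # (batch, seq, topk, dim)
        out = (weights.unsqueeze(-1) * selected).sum(dim=-2)
        return x + out                              # residual connection


if __name__ == "__main__":
    layer = MemoryLayer(dim=64)
    h = torch.randn(2, 10, 64)
    print(layer(h).shape)  # torch.Size([2, 10, 64])
```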