thinking-machines-lab / tinker
Training API
⭐202 · Updated 3 weeks ago
Alternatives and similar repositories for tinker
Users who are interested in tinker are comparing it to the libraries listed below.
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ⭐270 · Updated last week
- A MAD laboratory to improve AI architecture designs 🧪 ⭐132 · Updated 10 months ago
- Understand and test language model architectures on synthetic tasks. ⭐237 · Updated last month
- Simple & Scalable Pretraining for Neural Architecture Research ⭐298 · Updated last week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐171 · Updated 4 months ago
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ⭐336 · Updated 11 months ago
- ⭐106 · Updated 2 weeks ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ⭐232 · Updated 3 months ago
- Open-source framework for the research and development of foundation models. ⭐574 · Updated last week
- ⭐877 · Updated this week
- PyTorch-native post-training at scale ⭐479 · Updated last week
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ⭐296 · Updated 2 weeks ago
- Open source interpretability artefacts for R1. ⭐163 · Updated 6 months ago
- Long context evaluation for large language models ⭐224 · Updated 8 months ago
- ⭐231 · Updated 4 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ⭐110 · Updated 6 months ago
- A Gym for Agentic LLMs ⭐347 · Updated last week
- Storing long contexts in tiny caches with self-study ⭐210 · Updated 3 weeks ago
- Meta Agents Research Environments is a comprehensive platform designed to evaluate AI agents in dynamic, realistic scenarios. Unlike stat… ⭐338 · Updated last week
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ⭐179 · Updated 4 months ago
- rl from zero pretrain, can it be done? yes. ⭐280 · Updated last month
- ⭐108 · Updated last year
- PyTorch building blocks for the OLMo ecosystem ⭐314 · Updated this week
- Training-Ready RL Environments + Evals ⭐164 · Updated this week
- ⭐154 · Updated 2 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ⭐353 · Updated 10 months ago
- ⭐114 · Updated 3 weeks ago
- Code for the paper: "Learning to Reason without External Rewards" ⭐370 · Updated 3 months ago
- Physics of Language Models, Part 4 ⭐255 · Updated 3 months ago
- Dion optimizer algorithm ⭐379 · Updated last week