haizelabs / j1-micro
j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models.
☆96 · Updated last month
Alternatives and similar repositories for j1-micro
Users interested in j1-micro are comparing it to the repositories listed below.
- A framework for optimizing DSPy programs with RL ☆154 · Updated last week
- ☆68 · Updated 3 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 7 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆48 · Updated 4 months ago
- Train your own SOTA deductive reasoning model ☆106 · Updated 6 months ago
- ☆39 · Updated last year
- Storing long contexts in tiny caches with self-study ☆179 · Updated last week
- look how they massacred my boy ☆64 · Updated 10 months ago
- ☆133 · Updated 5 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆77 · Updated 5 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆68 · Updated 4 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 7 months ago
- Training-Ready RL Environments + Evals ☆77 · Updated this week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 7 months ago
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 3 months ago
- Simple repository for training small reasoning models ☆40 · Updated 7 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 6 months ago
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆33 · Updated 4 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 10 months ago
- Pivotal Token Search ☆122 · Updated last month
- ☆79 · Updated last week
- ⚖️ Awesome LLM Judges ⚖️ ☆127 · Updated 4 months ago
- ☆49 · Updated 7 months ago
- An introduction to LLM Sampling ☆80 · Updated 8 months ago
- rl from zero pretrain, can it be done? yes. ☆265 · Updated 3 weeks ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- lossily compress representation vectors using product quantization ☆59 · Updated 4 months ago
- A reading list of relevant papers and projects on foundation model annotation ☆27 · Updated 6 months ago
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆88 · Updated 11 months ago
- Code for our paper PAPILLON: PrivAcy Preservation from Internet-based and Local Language MOdel ENsembles ☆54 · Updated 4 months ago