haizelabs / j1-micro
j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models.
☆98 · Updated 2 months ago
Alternatives and similar repositories for j1-micro
Users interested in j1-micro are comparing it to the repositories listed below.
- ☆68 · Updated 4 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 8 months ago
- ☆40 · Updated last year
- look how they massacred my boy ☆63 · Updated 11 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆52 · Updated 4 months ago
- Train your own SOTA deductive reasoning model ☆107 · Updated 6 months ago
- A framework for optimizing DSPy programs with RL ☆185 · Updated last week
- Simple GRPO scripts and configurations. ☆59 · Updated 7 months ago
- Storing long contexts in tiny caches with self-study ☆192 · Updated 2 weeks ago
- ☆133 · Updated 6 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 8 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆77 · Updated 6 months ago
- Project code for training LLMs to write better unit tests + code ☆21 · Updated 4 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 11 months ago
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆24 · Updated 2 weeks ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 5 months ago
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated 11 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆128 · Updated 5 months ago
- ☆24 · Updated 4 months ago
- ☆81 · Updated last week
- ☆54 · Updated 10 months ago
- ☆49 · Updated 7 months ago
- rl from zero pretrain, can it be done? yes. ☆274 · Updated this week
- An introduction to LLM Sampling ☆79 · Updated 9 months ago
- ☆57 · Updated 8 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 7 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 6 months ago
- Small, simple agent task environments for training and evaluation ☆18 · Updated 11 months ago
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆116 · Updated last month
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆94 · Updated this week