haizelabs / j1-micro
j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models.
☆62 · Updated this week
Alternatives and similar repositories for j1-micro
Users interested in j1-micro are comparing it to the repositories listed below
- ☆57 · Updated last week
- A framework for optimizing DSPy programs with RL ☆54 · Updated this week
- look how they massacred my boy ☆63 · Updated 7 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 3 months ago
- Simple GRPO scripts and configurations. ☆58 · Updated 3 months ago
- ☆38 · Updated 10 months ago
- Simple repository for training small reasoning models ☆31 · Updated 3 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆38 · Updated 3 weeks ago
- Using modal.com to process FineWeb-edu data ☆20 · Updated last month
- ☆48 · Updated last year
- Project code for training LLMs to write better unit tests + code ☆20 · Updated last week
- Latent Large Language Models ☆18 · Updated 9 months ago
- SIMD quantization kernels ☆65 · Updated last week
- Small, simple agent task environments for training and evaluation ☆18 · Updated 7 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 3 weeks ago
- ☆19 · Updated 2 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆64 · Updated 6 months ago
- Train your own SOTA deductive reasoning model ☆92 · Updated 2 months ago
- ☆89 · Updated 8 months ago
- Lego for GRPO ☆28 · Updated last month
- ☆125 · Updated 2 months ago
- A reading list of relevant papers and projects on foundation model annotation ☆27 · Updated 3 months ago
- ☆28 · Updated 8 months ago
- Chat Markup Language conversation library ☆55 · Updated last year
- Accompanying material for the sleep-time compute paper ☆90 · Updated last month
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆99 · Updated 2 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆67 · Updated 3 months ago
- A fast, local, and secure approach for training LLMs for coding tasks using GRPO with WebAssembly and interpreter feedback. ☆24 · Updated last month
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Models" ☆44 · Updated last month
- Official repo for Learning to Reason for Long-Form Story Generation ☆58 · Updated last month