nano-R1 / resources
Compiling useful links, papers, benchmarks, ideas, etc.
☆45 · Updated 4 months ago
Alternatives and similar repositories for resources
Users interested in resources are comparing it to the repositories listed below.
- ☆94 · Updated this week
- rl from zero pretrain, can it be done? we'll see. ☆66 · Updated last week
- Simple Transformer in Jax ☆138 · Updated last year
- Simple repository for training small reasoning models ☆32 · Updated 5 months ago
- ☆130 · Updated 4 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆103 · Updated 4 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆68 · Updated 3 months ago
- look how they massacred my boy ☆63 · Updated 9 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆173 · Updated 6 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 9 months ago
- An introduction to LLM Sampling ☆79 · Updated 7 months ago
- Open source interpretability artefacts for R1. ☆157 · Updated 3 months ago
- Exploring Applications of GRPO (see the group-relative advantage sketch after this list) ☆244 · Updated 3 weeks ago
- smol models are fun too ☆93 · Updated 8 months ago
- Decentralized RL Training at Scale ☆400 · Updated this week
- train entropix like a champ! ☆19 · Updated 9 months ago
- Entropy Based Sampling and Parallel CoT Decoding (see the entropy/varentropy sketch after this list) ☆17 · Updated 9 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆45 · Updated 2 months ago
- Train your own SOTA deductive reasoning model ☆101 · Updated 4 months ago
- SIMD quantization kernels ☆76 · Updated this week
- Plotting (entropy, varentropy) for small LMs ☆98 · Updated 2 months ago
- ☆134 · Updated 4 months ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆568 · Updated this week
- Long context evaluation for large language models ☆220 · Updated 4 months ago
- ☆64 · Updated 2 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆94 · Updated 2 weeks ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆82 · Updated 3 months ago
- Storing long contexts in tiny caches with self-study ☆117 · Updated this week
- A reading list of relevant papers and projects on foundation model annotation ☆27 · Updated 5 months ago
- code for training & evaluating Contextual Document Embedding models ☆195 · Updated 2 months ago
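Several entries above (the GRPO exploration, the multi-turn calculator RL repo) build on GRPO, which scores each sampled completion against the other completions drawn for the same prompt instead of against a learned value function. A minimal sketch of that group-relative advantage, assuming a `(num_prompts, group_size)` reward tensor; the function name is ours, not code from any listed repo:

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # rewards: (num_prompts, group_size), one scalar reward per sampled completion
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    # Each completion's advantage is its reward standardized within its own group,
    # so "better than the other samples for this prompt" is the learning signal.
    return (rewards - mean) / (std + eps)
```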
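The entropy-sampling entries (the Entropix repos and the (entropy, varentropy) plotting repo) revolve around two statistics of the next-token distribution: its entropy and its varentropy (the variance of the surprisal). A minimal PyTorch sketch of computing both, assuming `logits` of shape `(..., vocab_size)`; the helper name is hypothetical:

```python
import torch

def entropy_varentropy(logits: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Entropy and varentropy of the next-token distribution (hypothetical helper)."""
    logp = torch.log_softmax(logits, dim=-1)  # log-probabilities
    p = logp.exp()
    # Shannon entropy: H = -sum_i p_i * log p_i
    entropy = -(p * logp).sum(dim=-1)
    # Varentropy: variance of the surprisal -log p_i under p,
    # i.e. sum_i p_i * (log p_i + H)^2
    varentropy = (p * (logp + entropy.unsqueeze(-1)).pow(2)).sum(dim=-1)
    return entropy, varentropy
```

Entropix-style samplers use where the model lands in this (entropy, varentropy) plane to pick a decoding strategy, which is exactly the pair the plotting repo visualizes.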