smolorg / smoltropix
MLX port of xjdr's entropix sampler (mimics the JAX implementation)
☆64 · Updated 7 months ago
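For context, entropix-style samplers gate their decoding decisions on the entropy and varentropy of the next-token distribution. The sketch below is not taken from smoltropix itself; it is a minimal illustration, assuming only `mlx.core`, of how those two statistics can be computed from raw logits.

```python
# Minimal sketch (not smoltropix's actual code) of the entropy/varentropy
# statistics that entropix-style samplers compute over next-token logits.
import mlx.core as mx

def entropy_varentropy(logits: mx.array) -> tuple[mx.array, mx.array]:
    """Entropy and varentropy of the softmax distribution over the last axis."""
    log_probs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
    probs = mx.exp(log_probs)
    # Entropy H = E[-log p]; varentropy = Var[-log p] under the same distribution.
    entropy = -mx.sum(probs * log_probs, axis=-1)
    centered = log_probs + mx.expand_dims(entropy, -1)
    varentropy = mx.sum(probs * centered**2, axis=-1)
    return entropy, varentropy

# Hypothetical usage: batch of 1 over a 32k-token vocabulary.
logits = mx.random.normal((1, 32000))
ent, vent = entropy_varentropy(logits)
```

In the upstream sampler these two quantities are used to switch between decoding strategies (roughly: act greedily when both are low, resample or branch when they are high); the exact thresholds and actions live in xjdr's original JAX code that smoltropix ports.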
Alternatives and similar repositories for smoltropix
Users interested in smoltropix are comparing it to the libraries listed below.
- look how they massacred my boy ☆63 · Updated 7 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆76 · Updated last month
- Modify Entropy Based Sampling to work with Mac Silicon via MLX ☆50 · Updated 6 months ago
- smolLM with Entropix sampler on PyTorch ☆150 · Updated 7 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆68 · Updated 3 months ago
- ☆59 · Updated last week
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated 7 months ago
- Plotting (entropy, varentropy) for small LMs ☆97 · Updated 2 weeks ago
- ☆38 · Updated 10 months ago
- A graph visualization of attention ☆55 · Updated 2 weeks ago
- 🦾💻🌐 distributed training & serverless inference at scale on RunPod ☆17 · Updated last year
- Approximating the joint distribution of language models via MCTS ☆21 · Updated 7 months ago
- ☆114 · Updated 5 months ago
- Lego for GRPO ☆28 · Updated last week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆100 · Updated 2 months ago
- Cerule - A Tiny Mighty Vision Model ☆66 · Updated 9 months ago
- train entropix like a champ! ☆20 · Updated 7 months ago
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ☆59 · Updated last year
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆74 · Updated last week
- Synthetic data derived by templating, few-shot prompting, transformations on public domain corpora, and Monte Carlo tree search. ☆32 · Updated 3 months ago
- SIMD quantization kernels ☆70 · Updated this week
- smol models are fun too ☆92 · Updated 6 months ago
- rl from zero pretrain, can it be done? we'll see. ☆24 · Updated this week
- Useful resources for LLM-based Diarization and Transcription. ☆55 · Updated 7 months ago
- ☆14 · Updated last month
- ☆75 · Updated 2 weeks ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆90 · Updated 4 months ago
- ☆28 · Updated 6 months ago
- The next evolution of Agents ☆48 · Updated last week