vgel / logitloom
explore token trajectory trees on instruct and base models
☆132 · Updated 3 months ago
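The description above captures logitloom's core idea: expanding a tree of high-probability token continuations from a model. As a rough, self-contained sketch of that idea (this is not logitloom's actual API; the bigram table and all function names below are invented stand-ins for a real model's top-k next-token logprobs):

```python
from dataclasses import dataclass, field

# Toy stand-in for an LLM's next-token distribution: a bigram table.
# A real tool would query a model for top-k logprobs instead.
BIGRAM = {
    "the": [("cat", 0.6), ("dog", 0.3), ("end", 0.1)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.8), ("sat", 0.2)],
    "sat": [("end", 1.0)],
    "ran": [("end", 1.0)],
    "end": [],
}

@dataclass
class Node:
    token: str
    prob: float                 # probability of this branch given its parent
    children: list = field(default_factory=list)

def expand(token: str, depth: int, k: int = 2) -> Node:
    """Recursively expand the top-k continuations of `token` to `depth`."""
    node = Node(token, 1.0)
    if depth == 0:
        return node
    for next_tok, p in BIGRAM.get(token, [])[:k]:
        child = expand(next_tok, depth - 1, k)
        child.prob = p
        node.children.append(child)
    return node

def render(node: Node, indent: int = 0) -> list:
    """Flatten the tree into indented 'token (p=...)' lines."""
    lines = ["{}{} (p={:.2f})".format("  " * indent, node.token, node.prob)]
    for child in node.children:
        lines.extend(render(child, indent + 1))
    return lines

if __name__ == "__main__":
    tree = expand("the", depth=3)
    print("\n".join(render(tree)))
```

Swapping the bigram lookup for a real model's top-k logprob query (and branching on probability thresholds rather than a fixed k) is the gap between this sketch and a usable loom.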
Alternatives and similar repositories for logitloom
Users interested in logitloom are comparing it to the libraries listed below.
- look how they massacred my boy ☆64 · Updated 11 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆73 · Updated 7 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆33 · Updated 6 months ago
- Approximating the joint distribution of language models via MCTS ☆21 · Updated 10 months ago
- Modify entropy-based sampling to work with Apple Silicon via MLX ☆49 · Updated 10 months ago
- Plotting (entropy, varentropy) for small LMs ☆98 · Updated 3 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆62 · Updated 10 months ago
- A graph visualization of attention ☆57 · Updated 3 months ago
- smolLM with the Entropix sampler in PyTorch ☆150 · Updated 10 months ago
- ☆39 · Updated last year
- Storing long contexts in tiny caches with self-study ☆179 · Updated last week
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆83 · Updated 3 weeks ago
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus. ☆58 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- Simple Transformer in JAX ☆139 · Updated last year
- PageRank for LLMs ☆50 · Updated last week
- smol models are fun too ☆93 · Updated 10 months ago
- A framework for optimizing DSPy programs with RL ☆172 · Updated this week
- ⚖️ Awesome LLM Judges ⚖️ ☆127 · Updated 4 months ago
- ☆68 · Updated 3 months ago
- rot13 version of Claude Code ☆40 · Updated 6 months ago
- LLMProc: a Unix-inspired runtime that treats LLMs as processes. ☆33 · Updated 2 months ago
- Sphynx Hallucination Induction ☆53 · Updated 7 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆96 · Updated 5 months ago
- j1-micro (1.7B) and j1-nano (600M) are absurdly tiny but mighty reward models. ☆97 · Updated last month
- An introduction to LLM sampling ☆79 · Updated 9 months ago
- The Prime Intellect CLI provides a powerful command-line interface for managing GPU resources across various providers ☆84 · Updated this week
- Project code for training LLMs to write better unit tests and code ☆21 · Updated 3 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 6 months ago
- Claude Deep Research config for Claude Code. ☆212 · Updated 6 months ago