xjdr-alt / llmri
look how they massacred my boy
☆63 · Updated 10 months ago
Alternatives and similar repositories for llmri
Users interested in llmri are comparing it to the libraries listed below.
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆63 · Updated 9 months ago
- ☆38 · Updated last year
- Modifies entropy-based sampling to work with Apple Silicon via MLX ☆49 · Updated 9 months ago
- Plotting (entropy, varentropy) for small LMs (see the sketch after this list) ☆98 · Updated 3 months ago
- smolLM with the Entropix sampler in PyTorch ☆150 · Updated 9 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 5 months ago
- ☆66 · Updated 3 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search. ☆32 · Updated 5 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- An introduction to LLM Sampling ☆79 · Updated 8 months ago
- A graph visualization of attention ☆57 · Updated 3 months ago
- Train your own SOTA deductive reasoning model ☆104 · Updated 5 months ago
- Entropy-Based Sampling and Parallel CoT Decoding ☆17 · Updated 10 months ago
- Approximating the joint distribution of language models via MCTS ☆21 · Updated 9 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated this week
- ☆134 · Updated last year
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆73 · Updated 6 months ago
- Simple Transformer in JAX ☆139 · Updated last year
- Train entropix like a champ! ☆20 · Updated 10 months ago
- Lego for GRPO ☆28 · Updated 2 months ago
- ☆47 · Updated last year
- Explore token trajectory trees on instruct and base models ☆133 · Updated 2 months ago
- SIMD quantization kernels ☆79 · Updated last week
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆69 · Updated 4 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆73 · Updated 5 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆95 · Updated last month
- smol models are fun too ☆92 · Updated 9 months ago
- An alternative way of calculating self-attention ☆18 · Updated last year
- Simple GRPO scripts and configurations. ☆59 · Updated 6 months ago
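Several of the repositories above revolve around entropix-style, entropy-based sampling, which branches on the entropy and varentropy of the next-token distribution. As a rough, self-contained sketch of those two quantities only (not code from any repository listed here; the function name and the NumPy dependency are assumptions made for illustration):

```python
# Illustrative only: computes entropy and varentropy of a next-token
# distribution from raw logits. Not taken from any repository above.
import numpy as np


def entropy_varentropy(logits: np.ndarray) -> tuple[float, float]:
    """Return (entropy, varentropy) of softmax(logits), in nats."""
    logits = logits - logits.max()                      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())   # log-softmax
    probs = np.exp(log_probs)
    surprisal = -log_probs                              # -log p(token)
    entropy = float((probs * surprisal).sum())          # E[-log p]
    varentropy = float((probs * (surprisal - entropy) ** 2).sum())  # Var[-log p]
    return entropy, varentropy


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_logits = rng.normal(size=32_000)               # stand-in for model output
    h, vh = entropy_varentropy(fake_logits)
    print(f"entropy={h:.3f} nats, varentropy={vh:.3f}")
```

Roughly speaking, an entropix-style sampler then chooses how to decode (e.g. argmax, temperature sampling, or branching) depending on where the current step falls in the (entropy, varentropy) plane.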