SinatrasC / entropix-smollm
SmolLM with the Entropix sampler in PyTorch
☆150 · Updated 11 months ago
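For context, Entropix-style samplers choose how to pick the next token based on the entropy and varentropy of the model's next-token distribution. The snippet below is a minimal, hypothetical PyTorch sketch of those two signals plus a simple confidence-gated sampling rule; the function names and the `low_entropy` threshold are illustrative assumptions, not code from this repository.

```python
# Hypothetical sketch of the entropy/varentropy signals used by entropy-based
# samplers. Thresholds and branch logic are illustrative, not project defaults.
import torch
import torch.nn.functional as F

def entropy_varentropy(logits: torch.Tensor):
    """Return the Shannon entropy and varentropy of a next-token logits vector."""
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)                      # H = -sum p log p
    varentropy = (probs * (log_probs + entropy.unsqueeze(-1)) ** 2).sum(dim=-1)
    return entropy, varentropy

def sample_next_token(logits: torch.Tensor,
                      low_entropy: float = 0.5,
                      temperature: float = 0.8) -> torch.Tensor:
    """Greedy when the model is confident (low entropy), otherwise temperature-sample."""
    entropy, _ = entropy_varentropy(logits)
    if entropy.item() < low_entropy:
        return logits.argmax(dim=-1)
    probs = F.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
```

In practice, implementations in this space also use varentropy (and attention statistics) to distinguish "confidently wrong" from "uncertain but exploring" states and to trigger behaviors such as resampling or injecting chain-of-thought tokens; the two-way branch above is only the simplest version of that idea.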
Alternatives and similar repositories for entropix-smollm
Users interested in entropix-smollm are comparing it to the repositories listed below.
- Plotting (entropy, varentropy) for small LMs ☆98 · Updated 5 months ago
- look how they massacred my boy ☆63 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆145 · Updated 8 months ago
- smol models are fun too ☆93 · Updated 11 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 7 months ago
- Modify Entropy Based Sampling to work with Mac Silicon via MLX ☆49 · Updated 11 months ago
- ☆136 · Updated last year
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆62 · Updated 11 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 9 months ago
- An introduction to LLM Sampling ☆79 · Updated 10 months ago
- Simple Transformer in Jax ☆139 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 2 months ago
- ☆40 · Updated last year
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated last year
- ☆124 · Updated 10 months ago
- Train your own SOTA deductive reasoning model ☆108 · Updated 7 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆181 · Updated 2 weeks ago
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last week
- ☆102 · Updated 9 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 6 months ago
- A graph visualization of attention ☆57 · Updated 5 months ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆194 · Updated 10 months ago
- ☆68 · Updated 5 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 11 months ago
- Fast parallel LLM inference for MLX ☆223 · Updated last year
- Comprehensive analysis of difference in performance of QLora, Lora, and Full Finetunes. ☆82 · Updated 2 years ago
- ☆123 · Updated last year