xjdr-alt / simple_transformer
Simple Transformer in Jax
☆139 · Updated last year
Alternatives and similar repositories for simple_transformer
Users interested in simple_transformer are comparing it to the repositories listed below.
- smol models are fun too ☆92 · Updated 9 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 5 months ago
- smolLM with Entropix sampler on pytorch ☆150 · Updated 9 months ago
- look how they massacred my boy ☆63 · Updated 10 months ago
- SIMD quantization kernels ☆79 · Updated 2 weeks ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 5 months ago
- A really tiny autograd engine ☆95 · Updated 2 months ago
- rl from zero pretrain, can it be done? yes. ☆250 · Updated last week
- Just large language models. Hackable, with as little abstraction as possible. Done for my own purposes, feel free to rip. ☆44 · Updated last year
- ☆98 · Updated 2 weeks ago
- Following master Karpathy with GPT-2 implementation and training, writing lots of comments cause I have memory of a goldfish ☆172 · Updated last year
- Plotting (entropy, varentropy) for small LMs ☆98 · Updated 3 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆69 · Updated 4 months ago
- ☆27 · Updated last year
- A graph visualization of attention ☆57 · Updated 3 months ago
- ☆118 · Updated 8 months ago
- An introduction to LLM Sampling ☆79 · Updated 8 months ago
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 8 months ago
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆63 · Updated 9 months ago
- ☆38 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated this week
- A puzzle to learn about prompting ☆132 · Updated 2 years ago
- PageRank for LLMs ☆44 · Updated 4 months ago
- ComplexTensor: Machine Learning By Bridging Classical and Quantum Computation ☆77 · Updated 9 months ago
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- ☆138 · Updated 4 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆180 · Updated last month
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- Gradient descent is cool and all, but what if we could delete it? ☆104 · Updated 2 weeks ago
- explore token trajectory trees on instruct and base models ☆133 · Updated 2 months ago