srush / Transformer-Puzzles
Puzzles for exploring transformers
☆370 · Updated 2 years ago
Alternatives and similar repositories for Transformer-Puzzles
Users interested in Transformer-Puzzles are comparing it to the repositories listed below
- ☆453 · Updated 11 months ago
- What would you do with 1000 H100s... · ☆1,100 · Updated last year
- ☆281 · Updated last year
- Annotated version of the Mamba paper · ☆489 · Updated last year
- A puzzle to learn about prompting · ☆135 · Updated 2 years ago
- An interactive exploration of Transformer programming. · ☆269 · Updated last year
- ☆534 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ☆658 · Updated this week
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. · ☆813 · Updated last month
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements. · ☆396 · Updated last week
- Building blocks for foundation models. · ☆552 · Updated last year
- Solve puzzles. Learn CUDA. · ☆63 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… · ☆349 · Updated last year
- For optimization algorithm research and development. · ☆536 · Updated this week
- 🧱 Modula software package · ☆237 · Updated last month
- seqax = sequence modeling + JAX · ☆167 · Updated last month
- ☆307 · Updated last year
- Everything you want to know about Google Cloud TPU · ☆545 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. · ☆225 · Updated last month
- ☆168 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. · ☆576 · Updated last month
- Efficient optimizers · ☆261 · Updated last month
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs · ☆605 · Updated this week
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. · ☆141 · Updated last year
- ☆497 · Updated last month
- An implementation of the transformer architecture onto an Nvidia CUDA kernel · ☆189 · Updated last year
- Fast bare-bones BPE for modern tokenizer training · ☆164 · Updated 2 months ago
- Resources from the EleutherAI Math Reading Group · ☆54 · Updated 6 months ago
- Extract full next-token probabilities via language model APIs · ☆248 · Updated last year
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds · ☆293 · Updated 2 months ago