karpathy / build-nanogpt
Video+code lecture on building nanoGPT from scratch
☆4,607 · Updated last year
Alternatives and similar repositories for build-nanogpt
Users interested in build-nanogpt are comparing it to the repositories listed below.
- An autoregressive character-level language model for making more things ☆3,508 · Updated last year
- NanoGPT (124M) in 3 minutes ☆3,947 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,217 · Updated last year
- llama3 implementation one matrix multiplication at a time ☆15,199 · Updated last year
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,712 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning small-sized VLMs. ☆4,380 · Updated last month
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,165 · Updated 3 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,923 · Updated 3 months ago
- A PyTorch-native platform for training generative AI models ☆4,847 · Updated this week
- LLM training in simple, raw C/CUDA ☆28,414 · Updated 5 months ago
- Modeling, training, eval, and inference code for OLMo ☆6,220 · Updated 3 weeks ago
- PyTorch-native post-training library ☆5,619 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale. ☆13,034 · Updated this week
- Material for gpu-mode lectures ☆5,432 · Updated last week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆50,929 · Updated last month
- A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API ☆14,078 · Updated last year
- LLM101n: Let's build a Storyteller ☆35,878 · Updated last year
- Inference Llama 2 in one file of pure C ☆19,032 · Updated last year
- Curated list of datasets and tools for post-training. ☆4,083 · Updated last month
- The official PyTorch implementation of Google's Gemma models ☆5,586 · Updated 6 months ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,835 · Updated last year
- DataComp for Language Models ☆1,401 · Updated 3 months ago
- Minimal reproduction of DeepSeek R1-Zero ☆12,486 · Updated 7 months ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆23,160 · Updated last year
- It is said that Ilya Sutskever gave John Carmack this reading list of ~30 research papers on deep learning. ☆1,001 · Updated last year
- Llama from scratch, or How to implement a paper without crying ☆581 · Updated last year
- Puzzles for learning Triton ☆2,170 · Updated last year
- A library for mechanistic interpretability of GPT-style language models ☆2,892 · Updated last week