The simplest, fastest repository for training/finetuning medium-sized GPTs.
☆55,030 · Nov 12, 2025 · Updated 4 months ago
Alternatives and similar repositories for nanoGPT
Users interested in nanoGPT are comparing it to the libraries listed below.
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆23,950 · Aug 15, 2024 · Updated last year
- Inference Llama 2 in one file of pure C ☆19,262 · Aug 6, 2024 · Updated last year
- LLM inference in C/C++ ☆98,098 · Updated this week
- LLM training in simple, raw C/CUDA ☆29,216 · Jun 26, 2025 · Updated 8 months ago
- The agent engineering platform ☆130,454 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆73,479 · Updated this week
- Inference code for Llama models ☆59,221 · Jan 26, 2025 · Updated last year
- A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API ☆15,046 · Aug 8, 2024 · Updated last year
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆158,060 · Updated this week
- LlamaIndex is the leading document agent and OCR platform ☆47,753 · Updated this week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,428 · Jun 2, 2025 · Updated 9 months ago
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,258 · Jul 17, 2024 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,869 · Updated this week
- OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamical… ☆37,433 · Aug 17, 2024 · Updated last year
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,375 · Jul 1, 2024 · Updated last year
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- ☆4,574 · Jan 31, 2024 · Updated 2 years ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,228 · Mar 6, 2026 · Updated 2 weeks ago
- Making large AI models cheaper, faster and more accessible ☆41,362 · Updated this week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆17,599 · Feb 8, 2026 · Updated last month
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. ☆54,096 · Updated this week
- GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use. ☆77,226 · May 27, 2025 · Updated 9 months ago
- Instruct-tune LLaMA on consumer hardware ☆18,961 · Jul 29, 2024 · Updated last year
- You like pytorch? You like micrograd? You love tinygrad! ❤️ ☆31,592 · Updated this week
- LLM101n: Let's build a Storyteller ☆36,559 · Aug 1, 2024 · Updated last year
- AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus o… ☆182,560 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,082 · Jul 1, 2025 · Updated 8 months ago
- Examples and guides for using the OpenAI API ☆72,193 · Mar 14, 2026 · Updated last week
- Train transformer language models with reinforcement learning. ☆17,697 · Updated this week
- DSPy: The framework for programming—not prompting—language models ☆32,853 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,809 · Updated this week
- Implement a ChatGPT-like LLM in PyTorch from scratch, step by step ☆88,603 · Mar 7, 2026 · Updated 2 weeks ago
- Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models. ☆165,557 · Updated this week
- Robust Speech Recognition via Large-Scale Weak Supervision ☆96,288 · Dec 15, 2025 · Updated 3 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,578 · Aug 12, 2024 · Updated last year
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆68,728 · Updated this week
- Video+code lecture on building nanoGPT from scratch ☆4,839 · Aug 13, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,419 · Mar 5, 2026 · Updated 2 weeks ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,046 · Jan 23, 2026 · Updated last month