The simplest, fastest repository for training/finetuning medium-sized GPTs.
☆54,071 · Updated Nov 12, 2025
Alternatives and similar repositories for nanoGPT
Users interested in nanoGPT are comparing it to the libraries listed below.
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training. ☆23,746 · Updated Aug 15, 2024
- Inference Llama 2 in one file of pure C. ☆19,213 · Updated Aug 6, 2024
- LLM inference in C/C++. ☆96,322 · Updated this week
- 🦜🔗 The platform for reliable agents. ☆127,809 · Updated Feb 28, 2026
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆71,883 · Updated this week
- LLM training in simple, raw C/CUDA. ☆28,993 · Updated Jun 26, 2025
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆157,462 · Updated this week
- LlamaIndex is the leading document agent and OCR platform. ☆47,374 · Updated this week
- Inference code for Llama models. ☆59,183 · Updated Jan 26, 2025
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,426 · Updated Jun 2, 2025
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,267 · Updated Jul 17, 2024
- OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamical… ☆37,444 · Updated Aug 17, 2024
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,706 · Updated Feb 27, 2026
- A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API. ☆14,842 · Updated Aug 8, 2024
- Making large AI models cheaper, faster and more accessible. ☆41,364 · Updated this week
- GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use. ☆77,171 · Updated May 27, 2025
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. ☆53,029 · Updated this week
- You like pytorch? You like micrograd? You love tinygrad! ❤️ ☆31,471 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,347 · Updated Jul 1, 2024
- Fast and memory-efficient exact attention. ☆22,460 · Updated this week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,206 · Updated this week
- Instruct-tune LLaMA on consumer hardware. ☆18,972 · Updated Jul 29, 2024
- AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus o… ☆182,190 · Updated this week
- Examples and guides for using the OpenAI API. ☆71,832 · Updated this week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆17,473 · Updated Feb 8, 2026
- DSPy: The framework for programming (not prompting) language models. ☆32,519 · Updated this week
- Robust Speech Recognition via Large-Scale Weak Supervision. ☆95,527 · Updated Dec 15, 2025
- Implement a ChatGPT-like LLM in PyTorch from scratch, step by step. ☆87,151 · Updated this week
- Train transformer language models with reinforcement learning. ☆17,523 · Updated this week
- Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models. ☆164,248 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,500 · Updated Aug 12, 2024
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,082 · Updated Jul 1, 2025
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,717 · Updated this week
- LLM101n: Let's build a Storyteller. ☆36,390 · Updated Aug 1, 2024
- Universal LLM Deployment Engine with ML Compilation. ☆22,082 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities. ☆22,030 · Updated Jan 23, 2026
- Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ☆41,921 · Updated this week
- Welcome to the Llama Cookbook! This is your go-to guide for Building with Llama: Getting started with Inference, Fine-Tuning, RAG. We als… ☆18,234 · Updated this week
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024). ☆67,966 · Updated this week
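Several entries above (minbpe, tiktoken) center on the Byte Pair Encoding algorithm used for LLM tokenization. As a rough illustration of the core idea only, not code taken from any of those repositories, a minimal BPE training loop can be sketched like this (all function names here are my own):

```python
from collections import Counter

def get_pair_counts(ids):
    # Count how often each adjacent pair of token ids occurs.
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    # Replace every non-overlapping occurrence of `pair` with `new_id`.
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    # Start from raw UTF-8 bytes (ids 0-255) and repeatedly merge the
    # most frequent adjacent pair into a fresh token id.
    ids = list(text.encode("utf-8"))
    merges = {}
    for step in range(num_merges):
        counts = get_pair_counts(ids)
        if not counts:
            break
        pair = counts.most_common(1)[0][0]
        new_id = 256 + step
        merges[pair] = new_id
        ids = merge(ids, pair, new_id)
    return ids, merges
```

Each merge shortens the sequence while growing the vocabulary by one; real tokenizers add details such as regex pre-splitting and special tokens on top of this loop.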