karpathy / nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
☆43,960 · Updated 8 months ago
Alternatives and similar repositories for nanoGPT
Users interested in nanoGPT are comparing it to the libraries listed below.
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆22,521 · Updated last year
- Inference Llama 2 in one file of pure C ☆18,715 · Updated last year
- A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API (see the autograd sketch after this list) ☆12,661 · Updated last year
- LLM training in simple, raw C/CUDA ☆27,510 · Updated 2 months ago
- ☆4,175 · Updated last year
- tiktoken is a fast BPE tokeniser for use with OpenAI's models (see the tokenizer sketch after this list). ☆15,741 · Updated this week
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,141 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆13,935 · Updated this week
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,887 · Updated last year
- Inference code for Llama models ☆58,685 · Updated 7 months ago
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,049 · Updated 3 months ago
- An autoregressive character-level language model for making more things ☆3,284 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆18,952 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (see the LoRA sketch after this list). ☆19,447 · Updated this week
- LLM inference in C/C++ ☆85,819 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,642 · Updated last year
- Tensor library for machine learning ☆13,086 · Updated last week
- Fast and memory-efficient exact attention ☆19,275 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆56,914 · Updated this week
- Train transformer language models with reinforcement learning. ☆15,330 · Updated this week
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,362 · Updated 10 months ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,079 · Updated 2 months ago
- A curated list of practical guide resources for LLMs (LLMs Tree, Examples, Papers) ☆10,036 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆23,428 · Updated last year
- llama3 implementation one matrix multiplication at a time ☆15,123 · Updated last year
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, Qwen3, Llama 4, DeepSeek-R1, Gemma 3, TTS 2x faster with 70% less… ☆44,900 · Updated this week
- 🦜🔗 Build context-aware reasoning applications 🦜🔗 ☆114,468 · Updated this week
- An unnecessarily tiny implementation of GPT-2 in NumPy. ☆3,403 · Updated 2 years ago
- JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf ☆24,326 · Updated last month
- Video+code lecture on building nanoGPT from scratch ☆4,331 · Updated last year
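
The tiny scalar autograd engine listed above (micrograd) advertises a PyTorch-like API. The following is a minimal sketch of how such an engine is typically used, assuming the `Value` class is importable from `micrograd.engine` as in the original repo layout; it is an illustration, not an excerpt from that codebase.

```python
# Sketch of a micrograd-style scalar autograd pass (assumes micrograd is installed).
from micrograd.engine import Value

a = Value(2.0)
b = Value(3.0)
c = a * b + a**2        # build a small expression graph of scalars
d = c.relu()            # nonlinearity, also tracked in the graph

d.backward()            # reverse-mode autodiff from the output scalar

print(d.data)           # forward value of the expression
print(a.grad, b.grad)   # d(d)/da and d(d)/db accumulated by backprop
```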
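For the tiktoken entry, a typical encode/decode round trip with its public API looks like the sketch below, assuming the package is installed and using the bundled "gpt2" encoding (the same BPE vocabulary nanoGPT uses).

```python
# Sketch of tiktoken's encode/decode round trip (assumes `pip install tiktoken`).
import tiktoken

enc = tiktoken.get_encoding("gpt2")          # BPE encoding used by GPT-2
tokens = enc.encode("hello world, nanoGPT")  # text -> list of token ids
print(tokens)
print(enc.n_vocab)                           # vocabulary size (50257 for gpt2)
print(enc.decode(tokens))                    # token ids -> original text
```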
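For the 🤗 PEFT entry, a common parameter-efficient setup wraps a Hugging Face model with a LoRA adapter so only the adapter weights are trained. This sketch assumes the `transformers` and `peft` packages and uses GPT-2's fused `c_attn` projection as an example target module; the hyperparameters are illustrative, not recommendations.

```python
# Sketch: wrap GPT-2 with a LoRA adapter via 🤗 PEFT (assumes transformers + peft installed).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

lora_cfg = LoraConfig(
    r=8,                       # rank of the low-rank update matrices
    lora_alpha=16,             # scaling applied to the low-rank update
    target_modules=["c_attn"], # GPT-2's fused QKV projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the adapter parameters are trainable
```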