karpathy / minGPT
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
☆23,075 · Updated last year
Alternatives and similar repositories for minGPT
Users interested in minGPT are comparing it to the libraries listed below.
- nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆50,560 · Updated 3 weeks ago
- micrograd: A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API (a short usage sketch follows this list). ☆13,943 · Updated last year
- sentencepiece: Unsupervised text tokenizer for Neural Network-based text generation. ☆11,474 · Updated 2 weeks ago
- accelerate: 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,348 · Updated this week
- makemore: An autoregressive character-level language model for making more things ☆3,492 · Updated last year
- qlora: QLoRA: Efficient Finetuning of Quantized LLMs ☆10,778 · Updated last year
- llama2.c: Inference Llama 2 in one file of pure C ☆18,995 · Updated last year
- trl: Train transformer language models with reinforcement learning. ☆16,552 · Updated this week
- pytorch-lightning: Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes. ☆30,546 · Updated this week
- LLaVA: [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,108 · Updated last year
- minbpe: Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,195 · Updated last year
- ggml: Tensor library for machine learning ☆13,648 · Updated last week
- tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️ ☆30,788 · Updated this week
- lit-llama: Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,087 · Updated 5 months ago
- tiktoken: tiktoken is a fast BPE tokeniser for use with OpenAI's models (a short usage sketch follows this list). ☆16,717 · Updated 2 months ago
- flash-attention: Fast and memory-efficient exact attention ☆20,904 · Updated this week
- DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆40,890 · Updated last week
- triton: Development repository for the Triton language and compiler ☆17,730 · Updated this week
- LoRA: Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆13,010 · Updated 11 months ago
- peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,215 · Updated this week
- tokenizers: 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆10,279 · Updated this week
- picoGPT: An unnecessarily tiny implementation of GPT-2 in NumPy. ☆3,423 · Updated 2 years ago
- RWKV-LM: RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,203 · Updated 3 weeks ago
- build-nanogpt: Video+code lecture on building nanoGPT from scratch ☆4,595 · Updated last year
- nn-zero-to-hero: Neural Networks: Zero to Hero ☆18,990 · Updated last year
- x-transformers: A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,706 · Updated last month
- Megatron-LM: Ongoing research training transformer models at scale ☆14,389 · Updated this week
- unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,866 · Updated 5 months ago
- stanford_alpaca: Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,236 · Updated last year
- google-research: Google Research ☆36,832 · Updated this week
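
To give a concrete feel for the micrograd entry above, here is a minimal sketch of its PyTorch-like autograd API. It assumes the package is installed (`pip install micrograd`); the input values are illustrative only.

```python
# Minimal sketch of micrograd's scalar autograd engine.
# Value wraps a scalar and records the operations applied to it;
# backward() then runs reverse-mode autodiff over that graph.
from micrograd.engine import Value

a = Value(2.0)
b = Value(3.0)
c = (a * b + a).relu()  # builds a small computation graph: relu(a*b + a)
c.backward()            # populates .grad on every node in the graph

print(c.data)  # forward value: 8.0
print(a.grad)  # dc/da = b + 1 = 4.0
print(b.grad)  # dc/db = a = 2.0
```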
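Similarly, for the tiktoken entry, a minimal sketch of round-tripping text through its GPT-2 BPE encoding. It assumes the package is installed (`pip install tiktoken`); the sample string is illustrative only.

```python
# Minimal sketch of BPE tokenization with tiktoken.
# "gpt2" is the same encoding family GPT-2-style models like minGPT target.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
tokens = enc.encode("minGPT is a minimal GPT implementation.")

print(tokens)              # list of integer token ids
print(enc.decode(tokens))  # decodes back to the original string
print(enc.n_vocab)         # vocabulary size (50257 for GPT-2)
```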