karpathy / nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
☆51,804 · Updated last month
Alternatives and similar repositories for nanoGPT
Users interested in nanoGPT are comparing it to the repositories listed below.
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆23,250 · Updated last year
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,351 · Updated 7 months ago
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,269 · Updated last year
- Inference Llama 2 in one file of pure C ☆19,089 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆18,989 · Updated last year
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,087 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,807 · Updated last year
- Inference code for Llama models ☆59,035 · Updated 11 months ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,424 · Updated this week (usage sketch after this list)
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆13,142 · Updated last year (the LoRA idea itself is sketched after this list)
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,879 · Updated 3 months ago
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,093 · Updated 6 months ago
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries ☆7,358 · Updated last month
- A guidance language for controlling large language models. ☆21,130 · Updated this week
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,273 · Updated 3 weeks ago
- llama3 implementation one matrix multiplication at a time ☆15,231 · Updated last year
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆16,965 · Updated 3 months ago (encode/decode sketch after this list)
- LlamaIndex is the leading framework for building LLM-powered agents over your data. ☆46,229 · Updated this week
- Fast and memory-efficient exact attention ☆21,516 · Updated this week (interface sketch after this list)
- 🦜🔗 The platform for reliable agents. ☆123,595 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,859 · Updated last year
- Making large AI models cheaper, faster and more accessible ☆41,311 · Updated 2 weeks ago
- Train transformer language models with reinforcement learning. ☆16,915 · Updated this week
- LLM training in simple, raw C/CUDA ☆28,559 · Updated 6 months ago
- Tensor library for machine learning ☆13,784 · Updated last week
- A latent text-to-image diffusion model ☆72,153 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. ☆7,881 · Updated this week (4-bit loading sketch after this list)
- Welcome to the Llama Cookbook! This is your go-to guide for Building with Llama: Getting started with Inference, Fine-Tuning, RAG. We als… ☆18,141 · Updated 2 months ago
- Universal LLM Deployment Engine with ML Compilation ☆21,833 · Updated last week
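
Several entries above (loralib, QLoRA, PEFT, lit-llama) revolve around LoRA, which freezes a pretrained weight matrix W and learns a low-rank update BA, so the effective weight becomes W + (α/r)·BA. A minimal PyTorch sketch of that idea, not loralib's actual API:

```python
# Minimal sketch of the LoRA idea: freeze the base weight, train only the
# low-rank factors A and B. Layer name and init scale are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Frozen base projection (stands in for a loaded checkpoint weight).
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Trainable low-rank factors; B starts at zero, so the model is
        # exactly the base model at the start of finetuning.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(512, 512)
out = layer(torch.randn(4, 512))  # (4, 512); only lora_A / lora_B get gradients
```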
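For the 🤗 PEFT entry, a hedged usage sketch: wrapping a Hugging Face causal LM with a LoRA adapter. The checkpoint and `target_modules` are illustrative choices (GPT-2 and its fused QKV projection `c_attn`), not requirements:

```python
# Attach a LoRA adapter to a pretrained model with 🤗 PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative checkpoint
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused QKV projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```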
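For bitsandbytes (and the k-bit loading that QLoRA builds on), a sketch of the transformers integration; the checkpoint name is illustrative, and the snippet assumes a CUDA GPU with bitsandbytes and accelerate installed:

```python
# Load a causal LM in 4-bit via transformers' bitsandbytes integration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # QLoRA's NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
)
model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```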
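Flash attention computes exact attention without materializing the full attention matrix. The flash-attn repo ships the fused CUDA kernels; as a sketch of the same computation (not flash-attn's own API), PyTorch's `scaled_dot_product_attention` exposes this interface and dispatches to a FlashAttention kernel when one is available:

```python
# Exact causal attention through PyTorch's fused-attention entry point.
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 128, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 128, 64])
```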
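And for tiktoken, the round-trip API in brief; "gpt2" is the byte-pair encoding nanoGPT itself uses to prepare its datasets:

```python
# Encode text to GPT-2 BPE token ids and decode them back.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
ids = enc.encode("hello world")
print(ids)              # token ids, e.g. [31373, 995]
print(enc.decode(ids))  # "hello world"
```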