karpathy / nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
☆37,411 · Updated 3 months ago
Related projects
Alternatives and complementary repositories for nanoGPT
- karpathy/minGPT: A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆20,199 · Updated 3 months ago
- karpathy/llama2.c: Inference Llama 2 in one file of pure C ☆17,476 · Updated 3 months ago
- Lightning-AI/litgpt: 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆10,734 · Updated last week
- openai/tiktoken: tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆12,427 · Updated last month
- meta-llama/llama: Inference code for Llama models ☆56,450 · Updated 3 months ago
- tatsu-lab/stanford_alpaca: Code and documentation to train Stanford's Alpaca models, and generate the data. ☆29,561 · Updated 4 months ago
- run-llama/llama_index: LlamaIndex is a data framework for your LLM applications ☆36,820 · Updated this week
- Lightning-AI/lit-llama: Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. ☆5,994 · Updated 2 months ago
- lm-sys/FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆36,993 · Updated this week
- artidoro/qlora: QLoRA: Efficient Finetuning of Quantized LLMs ☆10,059 · Updated 5 months ago
- Dao-AILab/flash-attention: Fast and memory-efficient exact attention ☆14,279 · Updated this week
- microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆35,508 · Updated this week
- meta-llama/llama-recipes: Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. ☆15,222 · Updated this week
- huggingface/peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆16,471 · Updated this week
- tloen/alpaca-lora: Instruct-tune LLaMA on consumer hardware ☆18,653 · Updated 3 months ago
- LAION-AI/Open-Assistant: OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so. ☆37,071 · Updated 3 months ago
- vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs ☆30,423 · Updated this week
- huggingface/trl: Train transformer language models with reinforcement learning. ☆10,086 · Updated this week
- ggerganov/llama.cpp: LLM inference in C/C++ ☆68,097 · Updated this week
- unslothai/unsloth: Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory ☆18,263 · Updated this week
- haotian-liu/LLaVA: [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆20,286 · Updated 3 months ago
- BlinkDL/RWKV-LM: RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer. ☆12,672 · Updated this week
- microsoft/unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆20,194 · Updated last week
- langchain-ai/langchain: 🦜🔗 Build context-aware reasoning applications ☆95,070 · Updated this week
- huggingface/text-generation-inference: Large Language Model Text Generation Inference ☆9,122 · Updated this week
- Stability-AI/StableLM: StableLM: Stability AI Language Models ☆15,828 · Updated 7 months ago
- microsoft/LoRA: Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆10,776 · Updated 3 months ago
- chroma-core/chroma: the AI-native open-source embedding database ☆15,448 · Updated this week
- stanfordnlp/dspy: DSPy: The framework for programming—not prompting—language models ☆18,885 · Updated this week
- meta-llama/codellama: Inference code for CodeLlama models ☆16,044 · Updated 3 months ago