karpathy / llm.c
LLM training in simple, raw C/CUDA
☆ 28,325 · Updated 5 months ago
Alternatives and similar repositories for llm.c
Users interested in llm.c are comparing it to the repositories listed below.
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆ 10,183 · Updated last year
- Inference Llama 2 in one file of pure C. ☆ 18,995 · Updated last year
- llama3 implementation one matrix multiplication at a time. ☆ 15,191 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆ 50,264 · Updated 3 weeks ago
- Tensor library for machine learning. ☆ 13,648 · Updated last week
- LLM101n: Let's build a Storyteller. ☆ 35,731 · Updated last year
- Video+code lecture on building nanoGPT from scratch. ☆ 4,584 · Updated last year
- A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API. ☆ 13,900 · Updated last year
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆ 8,816 · Updated last year
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆ 6,162 · Updated 3 months ago
- NanoGPT (124M) in 3 minutes. ☆ 3,911 · Updated last week
- LLM inference in C/C++. ☆ 90,508 · Updated last week
- Development repository for the Triton language and compiler. ☆ 17,730 · Updated this week
- Lightweight, standalone C++ inference engine for Google's Gemma models. ☆ 6,632 · Updated this week
- Machine Learning Engineering Open Book. ☆ 15,880 · Updated 2 weeks ago
- Material for gpu-mode lectures. ☆ 5,355 · Updated 2 weeks ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆ 14,177 · Updated 3 weeks ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks. ☆ 7,135 · Updated last year
- The official PyTorch implementation of Google's Gemma models. ☆ 5,578 · Updated 6 months ago
- PyTorch-native post-training library. ☆ 5,608 · Updated this week
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training. ☆ 23,075 · Updated last year
- A lightweight library for portable low-level GPU computation using WebGPU. ☆ 3,922 · Updated last month
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆ 12,975 · Updated last week
- DSPy: The framework for programming, not prompting, language models. ☆ 30,333 · Updated last week
- An autoregressive character-level language model for making more things. ☆ 3,476 · Updated last year
- A minimal GPU design in Verilog to learn how GPUs work from the ground up. ☆ 8,937 · Updated last year
- High-speed Large Language Model Serving for Local Deployment. ☆ 8,420 · Updated 4 months ago
- Fast and memory-efficient exact attention. ☆ 20,804 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆ 64,235 · Updated this week
- Solve puzzles. Learn CUDA. ☆ 11,790 · Updated last year
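The first entry above covers Byte Pair Encoding, the tokenization scheme most of these LLM projects rely on. As a rough illustration (a toy sketch, not code from any listed repository), BPE starts from raw bytes and repeatedly merges the most frequent adjacent pair into a new token id:

```python
from collections import Counter

def most_common_pair(ids):
    # Count adjacent token pairs and return the most frequent one.
    pairs = Counter(zip(ids, ids[1:]))
    return max(pairs, key=pairs.get)

def merge(ids, pair, new_id):
    # Replace every occurrence of `pair` with the single token `new_id`.
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

# Start from raw UTF-8 bytes (ids 0-255) and perform a few merges;
# each merge introduces one new token id, shrinking the sequence.
ids = list("aaabdaaabac".encode("utf-8"))  # 11 byte-level tokens
next_id = 256
for _ in range(3):
    pair = most_common_pair(ids)
    ids = merge(ids, pair, next_id)
    next_id += 1
```

In a real tokenizer the merge order is learned once over a large corpus and then replayed deterministically at encode time; this sketch only shows the core merge loop.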