karpathy / llm.c
LLM training in simple, raw C/CUDA
☆ 28,720 · Updated 7 months ago
Alternatives and similar repositories for llm.c
Users interested in llm.c are comparing it to the repositories listed below.
- Inference Llama 2 in one file of pure C (☆ 19,137 · Updated last year)
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization (☆ 10,287 · Updated last year)
- Tensor library for machine learning (☆ 13,907 · Updated this week)
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆ 52,437 · Updated 2 months ago)
- Development repository for the Triton language and compiler (☆ 18,319 · Updated this week)
- You like pytorch? You like micrograd? You love tinygrad! ❤️ (☆ 31,232 · Updated this week)
- llama3 implementation one matrix multiplication at a time (☆ 15,240 · Updated last year)
- A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API (☆ 14,478 · Updated last year)
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens (☆ 8,886 · Updated last year)
- LLM101n: Let's build a Storyteller (☆ 36,254 · Updated last year)
- Lightweight, standalone C++ inference engine for Google's Gemma models (☆ 6,714 · Updated last week)
- 20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale (☆ 13,126 · Updated last week)
- MLX: An array framework for Apple silicon (☆ 23,707 · Updated this week)
- Material for gpu-mode lectures (☆ 5,640 · Updated last month)
- Video+code lecture on building nanoGPT from scratch (☆ 4,707 · Updated last year)
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training (☆ 23,341 · Updated last year)
- Fast and memory-efficient exact attention (☆ 21,957 · Updated this week)
- High-speed Large Language Model Serving for Local Deployment (☆ 8,611 · Updated last week)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆ 69,007 · Updated this week)
- Welcome to the Llama Cookbook! This is your go-to guide for Building with Llama: Getting started with Inference, Fine-Tuning, RAG. We als… (☆ 18,171 · Updated 3 months ago)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆ 10,830 · Updated last year)
- A minimal GPU design in Verilog to learn how GPUs work from the ground up (☆ 11,126 · Updated last year)
- ☆ 4,113 · Updated last year
- tiktoken is a fast BPE tokeniser for use with OpenAI's models (☆ 17,127 · Updated 3 months ago)
- LLM inference in C/C++ (☆ 93,866 · Updated this week)
- NanoGPT (124M) in 2 minutes (☆ 4,515 · Updated this week)
- The official PyTorch implementation of Google's Gemma models (☆ 5,599 · Updated 8 months ago)
- SGLang is a high-performance serving framework for large language models and multimodal models (☆ 22,800 · Updated last week)
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python (☆ 6,180 · Updated 5 months ago)
- Inference code for Llama models (☆ 59,088 · Updated last year)