naklecha / llama3-from-scratch
llama3 implementation, one matrix multiplication at a time
☆15,162 · Updated last year
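The repo's premise — expressing the model as explicit matrix multiplications — can be illustrated with single-head self-attention, the core block of Llama-style transformers. This is a hedged NumPy sketch, not code from the repository; all names and shapes here are illustrative:

```python
import numpy as np

def attention(x, w_q, w_k, w_v):
    """Single-head self-attention written as plain matrix multiplications."""
    q = x @ w_q                                   # project inputs to queries
    k = x @ w_k                                   # ... to keys
    v = x @ w_v                                   # ... to values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # scaled dot-product scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                            # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                       # 4 tokens, embedding dim 8
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

Every step is a matmul plus a softmax normalization; the full model repeats this pattern with RoPE, causal masking, and multiple heads layered on top.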
Alternatives and similar repositories for llama3-from-scratch
Users interested in llama3-from-scratch are comparing it to the libraries listed below.
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,975 · Updated last year
- 20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale. ☆12,817 · Updated last week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,763 · Updated last year
- LLM training in simple, raw C/CUDA. ☆27,769 · Updated 3 months ago
- Train transformer language models with reinforcement learning. ☆15,739 · Updated this week
- Implement a ChatGPT-like LLM in PyTorch from scratch, step by step. ☆74,452 · Updated last week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,112 · Updated last month
- PyTorch-native post-training library. ☆5,523 · Updated this week
- Inference Llama 2 in one file of pure C. ☆18,801 · Updated last year
- 3D visualization of a GPT-style LLM. ☆5,053 · Updated last year
- Video + code lecture on building nanoGPT from scratch. ☆4,408 · Updated last year
- Modeling, training, eval, and inference code for OLMo. ☆6,019 · Updated last month
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024). ☆59,732 · Updated this week
- A modular graph-based Retrieval-Augmented Generation (RAG) system. ☆28,517 · Updated this week
- Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. ☆17,925 · Updated this week
- ☆4,096 · Updated last year
- Machine Learning Engineering Open Book. ☆15,386 · Updated last week
- Examples in the MLX framework. ☆7,897 · Updated last month
- Fine-tuning & reinforcement learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, and TTS models 2x faster with 70% less VRAM. ☆46,548 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,743 · Updated this week
- Fast and memory-efficient exact attention. ☆19,778 · Updated last week
- MLX: An array framework for Apple silicon. ☆22,395 · Updated this week
- A minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training. ☆22,669 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆44,823 · Updated 9 months ago
- The official Meta Llama 3 GitHub site. ☆28,998 · Updated 8 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA), built towards GPT-4V-level capabilities and beyond. ☆23,663 · Updated last year
- Go ahead and axolotl questions. ☆10,551 · Updated this week
- The n-gram Language Model. ☆1,447 · Updated last year
- LLM101n: Let's build a Storyteller. ☆34,503 · Updated last year
- A Next-Generation Training Engine Built for Ultra-Large MoE Models. ☆4,912 · Updated last week
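The BPE tokenization entry above describes an algorithm that fits in a few lines: repeatedly count adjacent token pairs and merge the most frequent one into a new token id. This is a minimal sketch under that description, not code from any of the listed repositories; the function names are illustrative:

```python
from collections import Counter

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    """Learn `num_merges` BPE merges over the raw UTF-8 bytes of `text`."""
    ids = list(text.encode("utf-8"))   # start from byte-level tokens 0..255
    merges = {}
    for new_id in range(256, 256 + num_merges):
        pairs = Counter(zip(ids, ids[1:]))
        if not pairs:
            break
        best = pairs.most_common(1)[0][0]   # most frequent adjacent pair
        ids = merge(ids, best, new_id)
        merges[best] = new_id
    return ids, merges

ids, merges = train_bpe("aaabdaaabac", 2)
print(len(merges))  # 2
```

Encoding new text then replays the learned merges in order; real tokenizers add special tokens, regex pre-splitting, and vocabulary serialization on top of this core loop.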