karpathy / ng-video-lecture
☆3,944 · Updated last year
Alternatives and similar repositories for ng-video-lecture:
Users interested in ng-video-lecture are comparing it to the repositories listed below.
- An autoregressive character-level language model for making more things ☆3,042 · Updated 11 months ago
- A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API ☆11,745 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆40,976 · Updated 4 months ago
- An unnecessarily tiny implementation of GPT-2 in NumPy. ☆3,350 · Updated 2 years ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,611 · Updated 10 months ago
- Inference Llama 2 in one file of pure C ☆18,321 · Updated 9 months ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆21,817 · Updated 8 months ago
- Neural Networks: Zero to Hero ☆13,668 · Updated 8 months ago
- Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM ☆7,794 · Updated last week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Ad… ☆6,052 · Updated 8 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,633 · Updated last year
- Tensor library for machine learning ☆12,445 · Updated this week
- Pure Python from-scratch zero-dependency implementation of Bitcoin for educational purposes ☆1,723 · Updated 3 years ago
- OpenLLaMA, a permissively licensed open-source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,482 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆6,872 · Updated 9 months ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,463 · Updated last year
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆5,940 · Updated 3 weeks ago
- Video+code lecture on building nanoGPT from scratch ☆4,077 · Updated 8 months ago
- LLM training in simple, raw C/CUDA ☆26,483 · Updated 7 months ago
- Instruct-tune LLaMA on consumer hardware ☆18,902 · Updated 9 months ago
- Train to 94% on CIFAR-10 in <6.3 seconds on a single A100, or ~95.79% in ~110 seconds (or less!) ☆1,252 · Updated 4 months ago
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆29,972 · Updated 9 months ago
- The n-gram Language Model ☆1,419 · Updated 9 months ago
- A collection of libraries to optimise AI model performance ☆8,370 · Updated 9 months ago
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,312 · Updated 6 months ago
- llama3 implementation one matrix multiplication at a time ☆14,909 · Updated 11 months ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,711 · Updated 4 months ago
- Notebooks and various random fun ☆1,096 · Updated 2 years ago
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆14,358 · Updated last month
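Several entries above revolve around a handful of compact algorithms. The Byte Pair Encoding entries, for example, refer to a merge-based tokenization scheme: repeatedly find the most frequent adjacent pair of tokens and replace it with a new token id. A minimal sketch over raw UTF-8 bytes (an illustration of the idea, not code from any of the listed repos):

```python
from collections import Counter

def most_common_pair(ids):
    """Return the most frequent adjacent pair of token ids."""
    return Counter(zip(ids, ids[1:])).most_common(1)[0][0]

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def bpe_train(text, num_merges):
    """Learn `num_merges` BPE merges, starting from raw UTF-8 bytes."""
    ids = list(text.encode("utf-8"))
    merges = {}
    for step in range(num_merges):
        pair = most_common_pair(ids)
        new_id = 256 + step  # ids 0..255 are reserved for raw bytes
        merges[pair] = new_id
        ids = merge(ids, pair, new_id)
    return ids, merges
```

Encoding new text then means replaying the learned merges in order; decoding inverts the merge table back down to bytes.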
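The "tiny scalar-valued autograd engine" entry describes reverse-mode automatic differentiation over a graph of scalars: each operation records how to route gradients back to its inputs, and a backward pass walks the graph in reverse topological order. A simplified sketch of that idea (my own illustration, not the repo's code):

```python
class Value:
    """A scalar that tracks its computation graph for reverse-mode autodiff."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # product rule: each input's gradient scales by the other's value
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # build a topological order, then apply the chain rule from the output back
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()
```

With `+` and `*` in place, any expression built from them differentiates automatically; a full engine adds more ops (pow, tanh, etc.) in the same pattern.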
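Likewise, the character-level and n-gram language-model entries reduce to counting transitions in training text and sampling from those counts autoregressively. A toy bigram version (illustrative only; the listed repos go well beyond this):

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count character-to-character transitions in a training string."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n, seed=0):
    """Sample n characters, each conditioned only on the previous one."""
    rng = random.Random(seed)
    out = start
    for _ in range(n):
        options = counts[out[-1]]
        if not options:
            break  # no observed successor for this character
        chars = list(options)
        weights = [options[c] for c in chars]
        out += rng.choices(chars, weights=weights)[0]
    return out
```

Replacing the count table with a learned neural network that predicts the next-token distribution is exactly the step the lecture-style repos above walk through.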