karpathy / ng-video-lecture
☆3,693 · Updated 11 months ago
Alternatives and similar repositories for ng-video-lecture:
Users interested in ng-video-lecture are comparing it to the repositories listed below
- An autoregressive character-level language model for making more things ☆2,702 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆38,486 · Updated last month
- Video+code lecture on building nanoGPT from scratch ☆3,782 · Updated 5 months ago
- Train transformer language models with reinforcement learning. ☆10,609 · Updated this week
- A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API ☆10,914 · Updated 5 months ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆20,810 · Updated 5 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆6,522 · Updated this week
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,021 · Updated 4 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,168 · Updated 7 months ago
- Inference Llama 2 in one file of pure C ☆17,858 · Updated 5 months ago
- An unnecessarily tiny implementation of GPT-2 in NumPy. ☆3,298 · Updated last year
- Neural Networks: Zero to Hero ☆12,694 · Updated 4 months ago
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆5,749 · Updated last month
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,326 · Updated 6 months ago
- A framework for few-shot evaluation of language models. ☆7,474 · Updated this week
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,799 · Updated 10 months ago
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆11,119 · Updated last month
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,567 · Updated last year
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,417 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,339 · Updated last month
- Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!) ☆1,239 · Updated last month
- Pure Python from-scratch zero-dependency implementation of Bitcoin for educational purposes ☆1,646 · Updated 3 years ago
- Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned and more will be updated) ☆3,851 · Updated 7 months ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆11,197 · Updated this week
- Official inference library for Mistral models ☆9,857 · Updated 2 months ago
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆13,023 · Updated 3 months ago
- ☆4,050 · Updated 7 months ago
- The n-gram Language Model ☆1,363 · Updated 5 months ago
- Robust recipes to align language models with human and AI preferences ☆4,896 · Updated last month