karpathy / micrograd
A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API
☆10,914 · Updated 5 months ago
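To make "scalar-valued autograd engine" concrete, here is a minimal sketch of the idea in plain Python. This is illustrative code in the spirit of micrograd, not micrograd's actual implementation: each `Value` records its inputs and a closure that applies the chain rule when `backward()` walks the graph in reverse topological order.

```python
# Minimal sketch of a scalar-valued autograd engine (illustrative,
# not micrograd's actual code). Each Value node stores its data, its
# accumulated gradient, its parent nodes, and a closure that pushes
# the chain rule from this node back to its parents.
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._prev = set(_children)
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(out)/d(self) = d(out)/d(other) = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # product rule: d(out)/d(self) = other, d(out)/d(other) = self
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topological sort so each node's grad is fully accumulated
        # before it is propagated to its parents.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# c = a*b + a, so dc/da = b + 1 and dc/db = a
a, b = Value(3.0), Value(-3.0)
c = a * b + a
c.backward()
print(a.grad, b.grad)  # -2.0 3.0
```

The "PyTorch-like API" in the tagline refers to exactly this shape: build an expression from `Value` objects, call `.backward()` on the result, and read gradients off each node's `.grad`, mirroring `torch.Tensor` on scalars.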
Alternatives and similar repositories for micrograd:
Users interested in micrograd are comparing it to the repositories listed below.
- An autoregressive character-level language model for making more things ☆2,702 · Updated 7 months ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆9,326 · Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆38,486 · Updated last month
- You like pytorch? You like micrograd? You love tinygrad! ❤️ ☆27,548 · Updated this week
- Neural Networks: Zero to Hero ☆12,694 · Updated 4 months ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆20,810 · Updated 5 months ago
- Inference Llama 2 in one file of pure C ☆17,858 · Updated 5 months ago
- LLM training in simple, raw C/CUDA ☆25,047 · Updated 3 months ago
- Official inference library for Mistral models ☆9,857 · Updated 2 months ago
- Tensor library for machine learning ☆11,541 · Updated this week
- LLM101n: Let's build a Storyteller ☆31,021 · Updated 5 months ago
- Flax is a neural network library for JAX that is designed for flexibility. ☆6,272 · Updated this week
- Video+code lecture on building nanoGPT from scratch ☆3,782 · Updated 5 months ago
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ☆30,985 · Updated this week
- Pure Python from-scratch zero-dependency implementation of Bitcoin for educational purposes ☆1,646 · Updated 3 years ago
- llama3 implementation one matrix multiplication at a time ☆14,030 · Updated 7 months ago
- Development repository for the Triton language and compiler ☆14,042 · Updated this week
- The n-gram Language Model ☆1,363 · Updated 5 months ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆11,197 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,168 · Updated 7 months ago
- Ongoing research training transformer models at scale ☆11,109 · Updated this week
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆9,349 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆33,809 · Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆16,978 · Updated this week
- Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!) ☆1,239 · Updated last month
- Fast and memory-efficient exact attention ☆15,064 · Updated this week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ☆13,023 · Updated 3 months ago
- A library for efficient similarity search and clustering of dense vectors. ☆32,387 · Updated this week