KellerJordan / modded-nanogpt
NanoGPT (124M) in 3 minutes
☆3,165 · Updated 2 months ago
Alternatives and similar repositories for modded-nanogpt
Users interested in modded-nanogpt are comparing it to the libraries listed below.
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,836 · Updated last month
- nanoGPT-style version of Llama 3.1 ☆1,427 · Updated last year
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,596 · Updated 5 months ago
- A PyTorch-native platform for training generative AI models ☆4,504 · Updated this week
- Code for the BLT research paper ☆1,987 · Updated 4 months ago
- Minimalistic large language model 3D-parallelism training ☆2,246 · Updated last month
- The n-gram Language Model ☆1,447 · Updated last year
- Muon is an optimizer for hidden layers in neural networks ☆1,803 · Updated 2 months ago
- Puzzles for learning Triton ☆2,022 · Updated 10 months ago
- Environments for LLM Reinforcement Learning ☆3,254 · Updated this week
- The simplest, fastest repository for training/finetuning small-sized VLMs ☆4,085 · Updated 3 weeks ago
- Schedule-Free Optimization in PyTorch ☆2,215 · Updated 4 months ago
- Video+code lecture on building nanoGPT from scratch ☆4,408 · Updated last year
- Official repository for our work on micro-budget training of large-scale diffusion models ☆1,512 · Updated 8 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆915 · Updated 5 months ago
- The Autograd Engine ☆637 · Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models ☆816 · Updated 2 months ago
- Entropy-Based Sampling and Parallel CoT Decoding ☆3,420 · Updated 10 months ago
- AllenAI's post-training codebase ☆3,222 · Updated this week
- The Multilayer Perceptron Language Model ☆568 · Updated last year
- Meta Lingua: a lean, efficient, and easy-to-hack codebase for LLM research ☆4,723 · Updated 2 months ago
- Tile primitives for speedy kernels ☆2,767 · Updated 2 weeks ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆3,431 · Updated this week
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,416 · Updated last week
- PyTorch-native quantization and sparsity for training and inference ☆2,384 · Updated last week
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks ☆2,660 · Updated last week
- Efficient Triton Kernels for LLM Training ☆5,714 · Updated this week
- System 2 Reasoning Link Collection ☆853 · Updated 6 months ago
- UNet diffusion model in pure CUDA ☆647 · Updated last year
- PyTorch-native post-training library ☆5,517 · Updated last week