karpathy / nano-llama31
nanoGPT-style version of Llama 3.1
☆1,432 Updated last year
Alternatives and similar repositories for nano-llama31
Users who are interested in nano-llama31 are comparing it to the repositories listed below.
- Implementing DeepSeek R1's GRPO algorithm from scratch (see the GRPO sketch after this list) ☆1,609 Updated 5 months ago
- NanoGPT (124M) in 3 minutes ☆3,176 Updated 2 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,846 Updated last month
- The Autograd Engine ☆636 Updated last year
- The n-gram Language Model ☆1,446 Updated last year
- The Multilayer Perceptron Language Model ☆568 Updated last year
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,374 Updated 5 months ago
- Minimalistic large language model 3D-parallelism training ☆2,252 Updated last month
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆916 Updated 5 months ago
- DataComp for Language Models ☆1,371 Updated last month
- The Tensor (or Array) ☆449 Updated last year
- Code for the BLT research paper ☆1,989 Updated 4 months ago
- UNet diffusion model in pure CUDA ☆649 Updated last year
- A PyTorch-native platform for training generative AI models ☆4,525 Updated this week
- Llama from scratch, or How to implement a paper without crying ☆580 Updated last year
- Video+code lecture on building nanoGPT from scratch ☆4,423 Updated last year
- Recipes to scale inference-time compute of open models ☆1,109 Updated 4 months ago
- From-scratch implementation of a sparse mixture-of-experts language model, inspired by Andrej Karpathy's makemore :) ☆751 Updated 11 months ago
- Run PyTorch LLMs locally on servers, desktop, and mobile ☆3,611 Updated last month
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆818 Updated 2 months ago
- Official repository for our work on micro-budget training of large-scale diffusion models ☆1,514 Updated 9 months ago
- A family of open-source Mixture-of-Experts (MoE) large language models ☆1,612 Updated last year
- Single-file, single-GPU, from-scratch, efficient, full-parameter tuning library for "RL for LLMs" ☆536 Updated last week
- OLMoE: Open Mixture-of-Experts Language Models ☆878 Updated 3 weeks ago
- Muon is Scalable for LLM Training ☆1,325 Updated 2 months ago
- Textbook on reinforcement learning from human feedback ☆1,259 Updated 2 weeks ago
- Muon is an optimizer for hidden layers in neural networks (see the Newton-Schulz sketch after this list) ☆1,827 Updated 3 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR ☆2,056 Updated last year
- System 2 Reasoning Link Collection ☆856 Updated 7 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,009 Updated this week
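
Two entries above name concrete algorithms worth a closer look. First, the GRPO repository: GRPO (Group Relative Policy Optimization), the algorithm behind DeepSeek R1's RL training, drops PPO's learned value function and instead baselines each sampled completion against the other completions drawn for the same prompt. Below is a minimal sketch of that group-relative advantage step, assuming scalar per-completion rewards; the function name and shapes are illustrative, not the linked repo's API.

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Group-relative advantages in the style of GRPO.

    rewards: (num_prompts, group_size) tensor of scalar rewards, one row per
    prompt, one column per completion sampled for that prompt.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    # Each completion is scored relative to its own group,
    # so no separate value network is needed.
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled completions each.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.5, 0.5, 0.5, 0.5]])
print(grpo_advantages(rewards))  # second row is all zeros: no signal in a uniform group
```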
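Second, the Muon entries: Muon is a momentum-based optimizer whose distinguishing step is orthogonalizing each hidden layer's 2D momentum matrix, typically via a short Newton-Schulz iteration rather than an exact SVD. A rough sketch of that orthogonalization step follows, using the quintic coefficients published with Muon; treat it as an illustration under those assumptions, not the repositories' exact code.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize a 2D matrix via Newton-Schulz iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315  # quintic coefficients published with Muon
    X = G / (G.norm() + 1e-7)          # scale so the spectral norm is at most 1
    transposed = X.size(0) > X.size(1)
    if transposed:                     # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X
```

In Muon this replaces the raw momentum update for each weight matrix: the orthogonalized matrix equalizes the update's singular values, which is why the optimizer is restricted to 2D hidden-layer weights rather than embeddings or biases.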