karpathy / nano-llama31
nanoGPT style version of Llama 3.1
☆1,341 · Updated 7 months ago
Alternatives and similar repositories for nano-llama31:
Users interested in nano-llama31 are comparing it to the libraries listed below:
- NanoGPT (124M) in 3 minutes ☆2,403 · Updated this week
- Code for BLT research paper ☆1,436 · Updated this week
- The Multilayer Perceptron Language Model ☆543 · Updated 7 months ago
- A PyTorch native library for large model training ☆3,470 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆1,701 · Updated this week
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆855 · Updated last month
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆948 · Updated 2 weeks ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,266 · Updated last month
- The Autograd Engine ☆583 · Updated 6 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,313 · Updated this week
- The n-gram Language Model ☆1,402 · Updated 7 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,312 · Updated this week
- DataComp for Language Models ☆1,263 · Updated this week
- Bringing BERT into modernity via both architecture changes and scaling ☆1,283 · Updated this week
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,445 · Updated last month
- Training Large Language Model to Reason in a Continuous Latent Space ☆985 · Updated last month
- Everything about the SmolLM2 and SmolVLM family of models ☆2,035 · Updated last week
- From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :) ☆676 · Updated 4 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,481 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,041 · Updated 3 weeks ago
- Official repository for our work on micro-budget training of large-scale diffusion models. ☆1,356 · Updated 2 months ago
- PyTorch native quantization and sparsity for training and inference ☆1,913 · Updated this week
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ☆1,372 · Updated 11 months ago
- AllenAI's post-training codebase ☆2,827 · Updated this week
- The Tensor (or Array) ☆427 · Updated 7 months ago
- Recipes for shrinking, optimizing, customizing cutting edge vision models. 💜 ☆1,316 · Updated last week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,568 · Updated this week
- Tools for merging pretrained large language models. ☆5,458 · Updated this week