bkitano / llama-from-scratch
Llama from scratch, or How to implement a paper without crying
☆578 · Updated last year
Alternatives and similar repositories for llama-from-scratch
Users interested in llama-from-scratch are comparing it to the repositories listed below.
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆813 · Updated last month
- LLM Workshop by Sourab Mangrulkar ☆394 · Updated last year
- Best practices for distilling large language models. ☆574 · Updated last year
- What would you do with 1000 H100s... ☆1,094 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆211 · Updated last year
- ☆1,272 · Updated 6 months ago
- A comprehensive deep dive into the world of tokens ☆226 · Updated last year
- LLM papers I'm reading, mostly on inference and model compression ☆743 · Updated last year
- nanoGPT-style version of Llama 3.1 ☆1,424 · Updated last year
- Best practices & guides on how to write distributed PyTorch training code ☆470 · Updated 6 months ago
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆718 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆347 · Updated last year
- An open collection of methodologies to help with successful training of large language models. ☆508 · Updated last year
- Slides, notes, and materials for the workshop ☆331 · Updated last year
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023. ☆125 · Updated 2 years ago
- Fast & simple repository for pre-training and fine-tuning T5-style models ☆1,008 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆110 · Updated 11 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ☆1,701 · Updated last week
- From-scratch implementation of a sparse mixture-of-experts language model, inspired by Andrej Karpathy's makemore :) ☆738 · Updated 10 months ago
- Fast bare-bones BPE for modern tokenizer training ☆164 · Updated 2 months ago
- Minimalistic large language model 3D-parallelism training ☆2,164 · Updated last week
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆907 · Updated 4 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated last year
- The repository for the code of the UltraFastBERT paper ☆517 · Updated last year
- The Multilayer Perceptron Language Model ☆559 · Updated last year
- Easily embed, cluster and semantically label text datasets ☆567 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,276 · Updated 5 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,399 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models ☆479 · Updated 2 years ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆730 · Updated last year