bkitano / llama-from-scratch
Llama from scratch, or How to implement a paper without crying
☆580 · Updated last year
Alternatives and similar repositories for llama-from-scratch
Users interested in llama-from-scratch are comparing it to the repositories listed below.
- LLM Workshop by Sourab Mangrulkar ☆394 · Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆822 · Updated 3 months ago
- Best practices for distilling large language models. ☆583 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆714 · Updated 2 years ago
- LLM papers I'm reading, mostly on inference and model compression ☆743 · Updated last year
- What would you do with 1000 H100s... ☆1,121 · Updated last year
- A comprehensive deep dive into the world of tokens ☆226 · Updated last year
- An open collection of methodologies to help with successful training of large language models. ☆538 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆357 · Updated 2 years ago
- Best practices & guides on how to write distributed pytorch training code ☆530 · Updated 3 weeks ago
- Well documented, unit tested, type checked and formatted implementation of a vanilla transformer - for educational purposes. ☆268 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆211 · Updated last year
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,013 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆111 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆168 · Updated 4 months ago
- Automatically evaluate your LLMs in Google Colab ☆664 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆926 · Updated last week
- ☆1,309 · Updated 8 months ago
- An open collection of implementation tips, tricks and resources for training large language models ☆485 · Updated 2 years ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆729 · Updated last year
- From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :) ☆760 · Updated last year
- The repository for the code of the UltraFastBERT paper ☆518 · Updated last year
- ☆575 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆604 · Updated last year
- Slides, notes, and materials for the workshop ☆334 · Updated last year
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆290 · Updated 8 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated 2 years ago
- Easily embed, cluster and semantically label text datasets ☆584 · Updated last year
- batched loras ☆347 · Updated 2 years ago
- Puzzles for exploring transformers ☆376 · Updated 2 years ago