bkitano / llama-from-scratch
Llama from scratch, or How to implement a paper without crying
☆585 · Updated last year
Alternatives and similar repositories for llama-from-scratch
Users interested in llama-from-scratch are comparing it to the repositories listed below.
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆829 · Updated 6 months ago
- LLM Workshop by Sourab Mangrulkar ☆401 · Updated last year
- Best practices for distilling large language models. ☆602 · Updated last year
- LLM papers I'm reading, mostly on inference and model compression ☆751 · Updated 2 years ago
- Fast & simple repository for pre-training and fine-tuning T5-style models ☆1,018 · Updated last year
- What would you do with 1000 H100s... ☆1,145 · Updated 2 years ago
- ☆1,343 · Updated 11 months ago
- A comprehensive deep dive into the world of tokens ☆227 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆724 · Updated 2 years ago
- An open collection of methodologies to help with successful training of large language models. ☆550 · Updated last year
- Best practices & guides on how to write distributed PyTorch training code ☆571 · Updated 3 months ago
- Well-documented, unit-tested, type-checked and formatted implementation of a vanilla transformer, for educational purposes. ☆280 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆114 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models ☆493 · Updated 2 years ago
- LoRA and DoRA from-scratch implementations ☆215 · Updated last year
- From-scratch implementation of a sparse mixture-of-experts language model, inspired by Andrej Karpathy's makemore :) ☆790 · Updated last year
- The repository for the code of the UltraFastBERT paper ☆519 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) large language models ☆1,655 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Updated last year
- Extend existing LLMs far beyond their original training length with constant memory usage, without retraining ☆737 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,124 · Updated 8 months ago
- Automatically evaluate your LLMs in Google Colab ☆683 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,497 · Updated last month
- LLaMA 2 implemented from scratch in PyTorch ☆365 · Updated 2 years ago
- A minimal example of aligning language models with RLHF, similar to ChatGPT ☆224 · Updated 2 years ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆616 · Updated last year
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023. ☆129 · Updated 2 years ago
- A repository for research on medium-sized language models. ☆528 · Updated 7 months ago
- Fast bare-bones BPE for modern tokenizer training ☆174 · Updated 7 months ago
- A bagel, with everything. ☆326 · Updated last year