bkitano / llama-from-scratch
Llama from scratch, or How to implement a paper without crying
☆578 · Updated last year
Alternatives and similar repositories for llama-from-scratch
Users interested in llama-from-scratch are comparing it to the repositories listed below.
- LLM Workshop by Sourab Mangrulkar · ☆394 · Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. · ☆814 · Updated last month
- Best practices for distilling large language models. · ☆574 · Updated last year
- ☆1,279 · Updated 6 months ago
- What would you do with 1000 H100s... · ☆1,095 · Updated last year
- LoRA and DoRA from Scratch Implementations · ☆211 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s · ☆718 · Updated last year
- LLM papers I'm reading, mostly on inference and model compression · ☆742 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch · ☆348 · Updated last year
- Best practices & guides on how to write distributed pytorch training code · ☆475 · Updated 6 months ago
- From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :) · ☆739 · Updated 10 months ago
- An open collection of methodologies to help with successful training of large language models. · ☆511 · Updated last year
- Well documented, unit tested, type checked and formatted implementation of a vanilla transformer - for educational purposes. · ☆259 · Updated last year
- nanoGPT style version of Llama 3.1 · ☆1,423 · Updated last year
- Automatically evaluate your LLMs in Google Colab · ☆658 · Updated last year
- Fast & Simple repository for pre-training and fine-tuning T5-style models · ☆1,007 · Updated last year
- Slides, notes, and materials for the workshop · ☆332 · Updated last year
- The repository for the code of the UltraFastBERT paper · ☆519 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI · ☆1,400 · Updated last year
- A comprehensive deep dive into the world of tokens · ☆226 · Updated last year
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. · ☆125 · Updated 2 years ago
- The Multilayer Perceptron Language Model · ☆559 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models · ☆1,589 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day · ☆256 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript · ☆599 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation · ☆110 · Updated 11 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes · ☆1,720 · Updated 2 weeks ago
- LLM (Large Language Model) FineTuning · ☆560 · Updated 5 months ago
- batched loras · ☆345 · Updated 2 years ago
- Easily embed, cluster and semantically label text datasets · ☆567 · Updated last year