bkitano / llama-from-scratch
Llama from scratch, or How to implement a paper without crying
☆550 · Updated 9 months ago
Alternatives and similar repositories for llama-from-scratch:
Users interested in llama-from-scratch are comparing it to the repositories listed below.
- What would you do with 1000 H100s... ☆1,021 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,481 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆1,715 · Updated this week
- LLaMA 2 implemented from scratch in PyTorch ☆307 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Models, from Meta AI ☆1,374 · Updated 11 months ago
- LLM Workshop by Sourab Mangrulkar ☆368 · Updated 9 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,218 · Updated 2 weeks ago
- Best practices for distilling large language models. ☆506 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆198 · Updated last year
- ☆502 · Updated 4 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆783 · Updated 3 weeks ago
- ☆1,100 · Updated 3 weeks ago
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆709 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,451 · Updated 11 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆548 · Updated 2 months ago
- A comprehensive deep dive into the world of tokens ☆221 · Updated 9 months ago
- An open collection of methodologies to help with successful training of large language models. ☆480 · Updated last year
- nanoGPT-style version of Llama 3.1 ☆1,346 · Updated 7 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆770 · Updated this week
- Code for fine-tuning Platypus fam LLMs using LoRA ☆628 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆574 · Updated 8 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆817 · Updated 2 weeks ago
- The repository for the code of the UltraFastBERT paper ☆517 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆149 · Updated 5 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,182 · Updated 4 months ago
- From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :) ☆682 · Updated 4 months ago
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,000 · Updated 7 months ago
- Annotated version of the Mamba paper ☆477 · Updated last year
- Serving multiple LoRA-finetuned LLMs as one ☆1,040 · Updated 10 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆691 · Updated 11 months ago