jsbaan / transformer-from-scratch
Well-documented, unit-tested, type-checked and formatted implementation of a vanilla transformer, for educational purposes.
☆274 · Updated last year
Alternatives and similar repositories for transformer-from-scratch
Users interested in transformer-from-scratch are comparing it to the repositories listed below.
- Tutorial on how to build BERT from scratch ☆100 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆118 · Updated 2 years ago
- Llama from scratch, or How to implement a paper without crying ☆581 · Updated last year
- Annotations of the interesting ML papers I read ☆269 · Updated 2 months ago
- Attention Is All You Need | a PyTorch Tutorial to Transformers ☆359 · Updated last year
- I will build a Transformer from scratch ☆85 · Updated 5 months ago
- Annotated version of the Mamba paper ☆492 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆363 · Updated 2 years ago
- LoRA and DoRA from Scratch Implementations ☆214 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆161 · Updated last month
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆227 · Updated last year
- Puzzles for exploring transformers ☆380 · Updated 2 years ago
- Slides, notes, and materials for the workshop ☆337 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆90 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ☆112 · Updated last year
- ☆188 · Updated last year
- A NumPy implementation of the Transformer model in "Attention is All You Need" ☆58 · Updated last year
- Best practices & guides on how to write distributed PyTorch training code ☆553 · Updated 2 months ago
- ☆460 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆219 · Updated 9 months ago
- LLM Workshop by Sourab Mangrulkar ☆398 · Updated last year
- ☆99 · Updated last year
- Implementation of the first paper on word2vec ☆246 · Updated 3 years ago
- An open collection of implementation tips, tricks and resources for training large language models ☆490 · Updated 2 years ago
- MinT: Minimal Transformer Library and Tutorials ☆260 · Updated 3 years ago
- Notes on the "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆333 · Updated 2 years ago
- Documented and unit-tested educational deep learning framework with autograd from scratch ☆122 · Updated last year
- Implements Low-Rank Adaptation (LoRA) finetuning from scratch ☆82 · Updated 2 years ago
- Code Transformer neural network components piece by piece ☆369 · Updated 2 years ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆829 · Updated 4 months ago