jsbaan / transformer-from-scratch
Well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes.
☆282 · Updated last year
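For orientation, the centerpiece of any such vanilla-transformer implementation is scaled dot-product attention. The sketch below is a minimal, hypothetical PyTorch version for illustration only; it is not code from this repository, and the function name is our own.

```python
# Minimal sketch of scaled dot-product attention, the core operation a
# vanilla transformer builds on. Illustrative only; not taken from
# jsbaan/transformer-from-scratch.
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, head_dim)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        # Block attention to masked positions before the softmax.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # attention distribution over keys
    return weights @ v                       # weighted sum of value vectors

# Tiny smoke test with random tensors: batch=1, heads=2, seq=4, head_dim=8.
q = k = v = torch.randn(1, 2, 4, 8)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 2, 4, 8])
```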
Alternatives and similar repositories for transformer-from-scratch
Users interested in transformer-from-scratch are comparing it to the repositories listed below.
- Llama from scratch, or How to implement a paper without crying ☆584 · Updated last year
- I will build Transformer from scratch ☆84 · Updated 6 months ago
- Tutorial for how to build BERT from scratch ☆102 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch (see the LoRA sketch after this list) ☆122 · Updated 2 years ago
- LLaMA 2 implemented from scratch in PyTorch ☆366 · Updated 2 years ago
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆236 · Updated last year
- Annotations of the interesting ML papers I read ☆275 · Updated last month
- Code Transformer neural network components piece by piece ☆373 · Updated 2 years ago
- Annotated version of the Mamba paper ☆496 · Updated last year
- Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆337 · Updated 2 years ago
- Code implementation from my blog post: https://fkodom.substack.com/p/transformers-from-scratch-in-pytorch ☆97 · Updated 2 years ago
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆116 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆236 · Updated 11 months ago
- ☆190 · Updated 2 years ago
- MinT: Minimal Transformer Library and Tutorials ☆260 · Updated 3 years ago
- LLM Workshop by Sourab Mangrulkar ☆401 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆215 · Updated last year
- Attention Is All You Need | a PyTorch Tutorial to Transformers ☆362 · Updated last year
- A walkthrough of transformer architecture code ☆370 · Updated last year
- Original transformer paper: Implementation of Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems ☆243 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆94 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- Best practices & guides on how to write distributed PyTorch training code ☆576 · Updated 3 months ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆260 · Updated 2 years ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks ☆121 · Updated last year
- Starter pack for NeurIPS LLM Efficiency Challenge 2023 ☆129 · Updated 2 years ago
- Implements Low-Rank Adaptation (LoRA) fine-tuning from scratch ☆81 · Updated 2 years ago
- A numpy implementation of the Transformer model in "Attention is All You Need" ☆58 · Updated last year
- Documented and Unit Tested educational Deep Learning framework with Autograd from scratch ☆122 · Updated last year
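Several entries above reimplement LoRA from scratch. As a rough sketch of the common idea, assuming nothing about any particular repository's API, a LoRA layer keeps the pretrained weight frozen and learns only a scaled low-rank update; the class below is hypothetical illustration code.

```python
# Hypothetical minimal LoRA layer: y = base(x) + (alpha / r) * x @ A^T @ B^T,
# where the base weight is frozen and only the low-rank factors A, B train.
# Not code from any of the repositories listed above.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: update starts at 0
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Smoke test: only A and B receive gradients during training.
layer = LoRALinear(64, 64)
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```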