jsbaan / transformer-from-scratch
Well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes.
☆270 · Updated last year
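For orientation, the heart of such a vanilla transformer is scaled dot-product attention. The sketch below is illustrative only, not code from the repository; the function name and tensor shapes are assumptions:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_head); mask broadcastable to the score shape (assumed).
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))   # query-key similarity
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))  # block attention to masked positions
    weights = torch.softmax(scores, dim=-1)                    # attention distribution over keys
    return weights @ v                                         # weighted sum of values
```

A from-scratch implementation like this repository wraps this core in multi-head attention and adds token embeddings, positional encodings, residual connections, and layer normalization.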
Alternatives and similar repositories for transformer-from-scratch
Users interested in transformer-from-scratch are comparing it to the repositories listed below.
- I will build Transformer from scratch ☆86 · Updated 3 months ago
- Llama from scratch, or How to implement a paper without crying ☆580 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch (a minimal sketch of the idea appears after this list) ☆117 · Updated 2 years ago
- Tutorial for how to build BERT from scratch ☆101 · Updated last year
- Notes about the "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆321 · Updated 2 years ago
- Code Transformer neural network components piece by piece ☆367 · Updated 2 years ago
- Annotations of the interesting ML papers I read ☆263 · Updated last month
- Code implementation from my blog post: https://fkodom.substack.com/p/transformers-from-scratch-in-pytorch ☆95 · Updated 2 years ago
- LLaMA 2 implemented from scratch in PyTorch ☆358 · Updated 2 years ago
- ☆189 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆160 · Updated last year
- ☆99 · Updated last year
- https://slds-lmu.github.io/seminar_multimodal_dl/ ☆171 · Updated 2 years ago
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆219 · Updated last year
- RAGs: Simple implementations of Retrieval Augmented Generation (RAG) Systems ☆140 · Updated 9 months ago
- LoRA and DoRA from Scratch Implementations ☆211 · Updated last year
- Annotated version of the Mamba paper ☆490 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆111 · Updated last year
- A NumPy implementation of the Transformer model in "Attention Is All You Need" ☆58 · Updated last year
- Documented and unit-tested educational deep learning framework with autograd from scratch ☆122 · Updated last year
- MinT: Minimal Transformer Library and Tutorials ☆259 · Updated 3 years ago
- An extension of the nanoGPT repository for training small MoE models ☆210 · Updated 8 months ago
- Starter pack for NeurIPS LLM Efficiency Challenge 2023 ☆126 · Updated 2 years ago
- Attention Is All You Need | a PyTorch Tutorial to Transformers ☆355 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆395 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆168 · Updated 4 months ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated 2 years ago
- Slides, notes, and materials for the workshop ☆334 · Updated last year
- Tutorials for Triton, a language for writing GPU kernels ☆56 · Updated 2 years ago
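Several entries above implement low-rank adaptation (LoRA) from scratch. As a rough sketch of the idea: the class name, initialization scale, and default hyperparameters below are illustrative assumptions, not code from any listed repository.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = Wx + (alpha / r) * B(Ax)."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():  # freeze the pretrained layer
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # small random init
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zeros, so the update starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

Only lora_A and lora_B are trained, which is why LoRA can fine-tune a large model while updating only a small fraction of its parameter count.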