jsbaan / transformer-from-scratch
Well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes.
☆244 · Updated last year
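As a rough illustration of the kind of component such an educational implementation covers, here is a minimal sketch of scaled dot-product attention in PyTorch. This is not code from the repository; the function name, tensor shapes, and values are purely illustrative.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    # (batch, heads, seq_q, d_k) x (batch, heads, d_k, seq_k) -> attention scores
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    if mask is not None:
        # Positions where mask == 0 are excluded from attention
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ v  # (batch, heads, seq_q, d_v)

# Toy usage with illustrative shapes: batch=2, heads=4, seq=10, d_k=64
q = k = v = torch.randn(2, 4, 10, 64)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 10, 64])
```

A full transformer implementation wraps this mechanism in multi-head attention, feed-forward blocks, residual connections, and layer normalization, which is what most of the repositories listed below walk through in varying detail.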
Alternatives and similar repositories for transformer-from-scratch:
Users interested in transformer-from-scratch are comparing it to the repositories listed below.
- Tutorial on how to build BERT from scratch ☆92 · Updated 11 months ago
- I will build a Transformer from scratch ☆68 · Updated 11 months ago
- Code implementation from my blog post: https://fkodom.substack.com/p/transformers-from-scratch-in-pytorch ☆94 · Updated last year
- Annotated version of the Mamba paper ☆483 · Updated last year
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆182 · Updated 10 months ago
- Annotations of the interesting ML papers I read ☆240 · Updated last week
- Llama from scratch, or How to implement a paper without crying ☆559 · Updated 11 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆102 · Updated last year
- Interpretability for sequence generation models 🐛 🔍 ☆413 · Updated 2 weeks ago
- Documented and unit-tested educational deep learning framework with autograd from scratch ☆111 · Updated last year
- Code Transformer neural network components piece by piece ☆343 · Updated 2 years ago
- An implementation of the transformer architecture as an Nvidia CUDA kernel ☆180 · Updated last year
- MinT: Minimal Transformer Library and Tutorials ☆254 · Updated 2 years ago
- Outlining techniques for improving the training performance of your PyTorch model without compromising its accuracy ☆129 · Updated 2 years ago
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023 ☆124 · Updated last year
- Original transformer paper: implementation of Vaswani, Ashish, et al. "Attention is all you need." Advances in Neural Information Processing Systems ☆237 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆108 · Updated 7 months ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL, and Python, with multi-GPU support and automatic differentiation!) ☆150 · Updated 11 months ago
- Toolkit for attaching, training, saving, and loading new heads for transformer models ☆276 · Updated 2 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆255 · Updated last year
- Puzzles for exploring transformers ☆344 · Updated 2 years ago
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,003 · Updated 8 months ago
- An interactive exploration of Transformer programming ☆263 · Updated last year
- Highly commented implementations of Transformers in PyTorch ☆136 · Updated last year
- For optimization algorithm research and development ☆512 · Updated this week
- An open collection of implementation tips, tricks, and resources for training large language models ☆472 · Updated 2 years ago
- Everything you want to know about Google Cloud TPU ☆527 · Updated 9 months ago
- Deep learning library implemented from scratch in NumPy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments ☆51 · Updated last year