jsbaan / transformer-from-scratch
Well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes.
☆259 · Updated last year
Alternatives and similar repositories for transformer-from-scratch
Users interested in transformer-from-scratch are comparing it to the repositories listed below.
- Tutorial for how to build BERT from scratch · ☆99 · Updated last year
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch · ☆116 · Updated 2 years ago
- Llama from scratch, or How to implement a paper without crying · ☆578 · Updated last year
- I will build Transformer from scratch · ☆81 · Updated last month
- LLaMA 2 implemented from scratch in PyTorch · ☆350 · Updated last year
- Notes about "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) · ☆308 · Updated 2 years ago
- A Simplified PyTorch Implementation of Vision Transformer (ViT) · ☆205 · Updated last year
- Attention Is All You Need | a PyTorch Tutorial to Transformers · ☆338 · Updated last year
- MinT: Minimal Transformer Library and Tutorials · ☆257 · Updated 3 years ago
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) · ☆105 · Updated 2 years ago
- Annotated version of the Mamba paper · ☆489 · Updated last year
- Code Transformer neural network components piece by piece · ☆362 · Updated 2 years ago
- Annotations of the interesting ML papers I read · ☆251 · Updated last month
- A set of scripts and notebooks on LLM finetuning and dataset creation · ☆110 · Updated 11 months ago
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) · ☆158 · Updated last year
- LLM Workshop by Sourab Mangrulkar · ☆394 · Updated last year
- Documented and Unit Tested educational Deep Learning framework with Autograd from scratch. · ☆120 · Updated last year
- Tutorial Materials for "The Fundamentals of Modern Deep Learning with PyTorch" workshop at PyCon 2024 · ☆248 · Updated last year
- The purpose of this repo is to make it easy to get started with JAX, Flax, and Haiku. It contains my "Machine Learning with JAX" series o… · ☆750 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day · ☆256 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models · ☆479 · Updated 2 years ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… · ☆71 · Updated last year
- Highly commented implementations of Transformers in PyTorch · ☆136 · Updated 2 years ago
- This repository contains an overview of important follow-up works based on the original Vision Transformer (ViT) by Google. · ☆181 · Updated 3 years ago
- ☆184 · Updated last year
- Best practices & guides on how to write distributed pytorch training code · ☆475 · Updated 6 months ago
- ☆453 · Updated 10 months ago
- ☆95 · Updated 11 months ago
- Notes on quantization in neural networks · ☆98 · Updated last year
- Puzzles for exploring transformers · ☆368 · Updated 2 years ago