jsbaan / transformer-from-scratch
Well-documented, unit-tested, type-checked and formatted implementation of a vanilla transformer, for educational purposes.
☆253 · Updated last year
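For orientation, here is a minimal sketch of the kind of encoder block a vanilla transformer implementation like this typically contains, assuming PyTorch. It is not code from jsbaan/transformer-from-scratch; the class and argument names (`TransformerEncoderBlock`, `d_model`, `num_heads`, `d_ff`) are illustrative only.

```python
# Minimal sketch of a vanilla transformer encoder block, assuming PyTorch.
# NOT code from jsbaan/transformer-from-scratch; names are illustrative.
from typing import Optional

import torch
import torch.nn as nn


class TransformerEncoderBlock(nn.Module):
    """One encoder block: multi-head self-attention plus a position-wise
    feed-forward network, each wrapped in a residual connection and layer
    normalization (post-norm, as in the original transformer paper)."""

    def __init__(self, d_model: int = 512, num_heads: int = 8,
                 d_ff: int = 2048, dropout: float = 0.1) -> None:
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads,
                                               dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor,
                padding_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        # Self-attention sublayer: queries, keys and values all come from x.
        attn_out, _ = self.self_attn(x, x, x, key_padding_mask=padding_mask)
        x = self.norm1(x + self.dropout(attn_out))
        # Position-wise feed-forward sublayer.
        x = self.norm2(x + self.dropout(self.ff(x)))
        return x


if __name__ == "__main__":
    block = TransformerEncoderBlock()
    tokens = torch.randn(2, 10, 512)   # (batch, sequence length, d_model)
    print(block(tokens).shape)         # torch.Size([2, 10, 512])
```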
Alternatives and similar repositories for transformer-from-scratch
Users interested in transformer-from-scratch are comparing it to the repositories listed below.
- Annotated version of the Mamba paper ☆485 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆68 · Updated last year
- Llama from scratch, or How to implement a paper without crying ☆567 · Updated last year
- Tutorial for how to build BERT from scratch ☆94 · Updated last year
- Puzzles for exploring transformers ☆350 · Updated 2 years ago
- Code implementation from my blog post: https://fkodom.substack.com/p/transformers-from-scratch-in-pytorch ☆92 · Updated last year
- I will build Transformer from scratch ☆70 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆150 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆335 · Updated last year
- An implementation of the transformer architecture onto an Nvidia CUDA kernel ☆185 · Updated last year
- Simple Byte Pair Encoding mechanism used for tokenization, written purely in C ☆134 · Updated 7 months ago
- ☆174 · Updated 5 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆152 · Updated 3 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆257 · Updated last year
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆191 · Updated last year
- Documented and unit-tested educational deep learning framework with autograd from scratch. ☆115 · Updated last year
- The Tensor (or Array) ☆436 · Updated 10 months ago
- LoRA and DoRA from Scratch Implementations ☆204 · Updated last year
- An interactive exploration of Transformer programming. ☆264 · Updated last year
- ☆159 · Updated last year
- ☆435 · Updated 8 months ago
- Best practices & guides on how to write distributed PyTorch training code ☆441 · Updated 4 months ago
- Notes on quantization in neural networks ☆86 · Updated last year
- The Multilayer Perceptron Language Model ☆554 · Updated 10 months ago
- Fast bare-bones BPE for modern tokenizer training ☆159 · Updated 2 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆364 · Updated 3 months ago
- Alex Krizhevsky's original code from Google Code ☆192 · Updated 9 years ago
- Implementation of BERT-based Language Models ☆19 · Updated last year
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆104 · Updated 2 years ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆185 · Updated 3 weeks ago