AkiRusProd / numpy-transformer
A numpy implementation of the Transformer model in "Attention is All You Need"
☆56 · Updated 11 months ago
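For orientation, the core operation the repository implements is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. Below is a minimal numpy sketch of that formula; it is illustrative only, not code from the repository, and all names and shapes are made up:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # (seq_q, seq_k) similarity scores
    return softmax(scores, axis=-1) @ V             # weighted average of the values

# Toy shapes: 4 query positions, 6 key/value positions, width 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((6, 8))
V = rng.standard_normal((6, 8))
out = scaled_dot_product_attention(Q, K, V)  # shape (4, 8)
```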
Alternatives and similar repositories for numpy-transformer
Users interested in numpy-transformer are comparing it to the repositories listed below
- LLaMA 2 implemented from scratch in PyTorch ☆337 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models, implemented using PyTorch (see the LoRA sketch after this list) ☆112 · Updated last year
- Notes on quantization in neural networks ☆90 · Updated last year
- Well-documented, unit-tested, type-checked, and formatted implementation of a vanilla transformer, for educational purposes ☆254 · Updated last year
- Playground for Transformers ☆51 · Updated last year
- Several types of attention modules written in PyTorch for learning purposes ☆54 · Updated 9 months ago
- ☆180 · Updated 6 months ago
- Simple adaptation of BitNet ☆32 · Updated last year
- Simple Byte Pair Encoding mechanism for tokenization, written purely in C (see the BPE sketch after this list) ☆134 · Updated 8 months ago
- ☆161 · Updated last year
- Making the official Triton tutorials actually comprehensible ☆48 · Updated 3 months ago
- ML/DL math and method notes ☆61 · Updated last year
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug in and play, no complex CUDA kernels ☆104 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL, and Python, with multi-GPU support and automatic differentiation!) ☆150 · Updated last year
- Training small GPT-2-style models using Kolmogorov-Arnold networks ☆120 · Updated last year
- Deep learning library implemented from scratch in numpy: Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments ☆50 · Updated last year
- LoRA and DoRA from-scratch implementations ☆206 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆72 · Updated last year
- Tutorial on how to build BERT from scratch ☆95 · Updated last year
- Building a Transformer from scratch ☆71 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" ☆174 · Updated 3 months ago
- A repository for log-time feedforward networks ☆222 · Updated last year
- Best practices and guides on how to write distributed PyTorch training code ☆450 · Updated 4 months ago
- Notes about the LLaMA 2 model ☆63 · Updated last year
- Code used for the "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post ☆92 · Updated 2 years ago
- Train and evaluate 1.58-bit neural networks ☆26 · Updated last year
- PyTorch (Lightning) implementation of the Mamba model ☆29 · Updated 2 months ago
- Google TPU optimizations for transformers models ☆116 · Updated 5 months ago
- An extension of the nanoGPT repository for training small MoE models ☆162 · Updated 4 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models, leveraging PyTorch native components ☆206 · Updated this week
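For the LoRA entries above: the idea is to freeze a pretrained weight W and learn only a low-rank update ΔW = BA, scaled by α/r. Here is a minimal numpy sketch under those assumptions; the class and argument names are illustrative and not either repository's API:

```python
import numpy as np

class LoRALinear:
    """y = x W^T + (alpha / r) * x A^T B^T, with W frozen and only A, B trained."""
    def __init__(self, W, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                      # frozen pretrained weight
        self.A = rng.standard_normal((r, d_in)) * 0.01  # small random init
        self.B = np.zeros((d_out, r))                   # zero init: update starts at zero
        self.scale = alpha / r

    def forward(self, x):
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

W = np.random.default_rng(1).standard_normal((32, 64))
layer = LoRALinear(W, r=4, alpha=8)
y = layer.forward(np.ones((2, 64)))  # (2, 32); identical to x @ W.T until B is updated
```

Because B starts at zero, the adapted layer initially reproduces the frozen model exactly; DoRA additionally decomposes W into magnitude and direction before applying the low-rank update.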
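For the byte pair encoding entry above (that repository itself is written in C): BPE repeatedly replaces the most frequent adjacent pair of tokens with a new token id. A short Python sketch of that merge loop, using made-up toy data:

```python
from collections import Counter

def most_frequent_pair(ids):
    # Count adjacent pairs and return the most common one.
    return Counter(zip(ids, ids[1:])).most_common(1)[0][0]

def merge(ids, pair, new_id):
    # Replace every occurrence of `pair` with `new_id`.
    out, i = [], 0
    while i < len(ids):
        if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

ids = list(b"aaabdaaabac")  # start from raw bytes (ids 0..255)
merges = {}
for step in range(3):       # learn three merges
    pair = most_frequent_pair(ids)
    new_id = 256 + step     # new token ids start after the byte range
    merges[pair] = new_id
    ids = merge(ids, pair, new_id)
print(ids, merges)
```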