AkiRusProd / numpy-transformer
A numpy implementation of the Transformer model in "Attention is All You Need"
☆54 · Updated 8 months ago
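To ground what the repo implements: the core of the paper's Transformer is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. Below is a minimal numpy sketch of that operation; the function name and shapes are illustrative assumptions, not numpy-transformer's actual API.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017).
    # Q, K: (seq_len, d_k); V: (seq_len, d_v). Illustrative only, not the repo's API.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # pairwise similarity, (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # attention-weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```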
Alternatives and similar repositories for numpy-transformer:
Users interested in numpy-transformer are comparing it to the repositories listed below.
- ☆153 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch (see the numpy sketch after this list) ☆100 · Updated last year
- ML/DL math and method notes ☆60 · Updated last year
- Notes on quantization in neural networks ☆79 · Updated last year
- Custom kernels in the Triton language for accelerating LLMs ☆18 · Updated last year
- 🧠 A study guide to learn about Transformers ☆11 · Updated last year
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆173 · Updated this week
- Cataloging released Triton kernels ☆216 · Updated 3 months ago
- Prune transformer layers ☆68 · Updated 10 months ago
- A simplified PyTorch implementation of the Vision Transformer (ViT) ☆178 · Updated 10 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆44 · Updated 7 months ago
- Code used for the "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆91 · Updated last year
- An implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆63 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆156 · Updated 3 weeks ago
- Get down and dirty with FlashAttention-2 in PyTorch; plug and play, no complex CUDA kernels ☆102 · Updated last year
- Fast low-bit matmul kernels in Triton ☆288 · Updated this week
- A well-documented, unit-tested, type-checked, and formatted implementation of a vanilla Transformer, for educational purposes ☆241 · Updated 11 months ago
- Tutorial on how to build BERT from scratch ☆91 · Updated 10 months ago
- An extension of the nanoGPT repository for training small MoE models ☆123 · Updated last month
- Normalized Transformer (nGPT) ☆167 · Updated 4 months ago
- A deep learning library implemented from scratch in numpy: Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments ☆51 · Updated last year
- 100 days of building GPU kernels! ☆336 · Updated this week
- Several types of attention modules written in PyTorch for learning purposes ☆50 · Updated 6 months ago
- A collection of kernels written in the Triton language ☆118 · Updated last week
- ☆48 · Updated last year
- Fast Hadamard transform in CUDA, with a PyTorch interface ☆168 · Updated 10 months ago
- ☆153 · Updated 3 months ago
- Efficient Infinite Context Transformers with Infini-attention: PyTorch implementation + QwenMoE implementation + training script + 1M cont… ☆82 · Updated 11 months ago
- A documented and unit-tested educational deep learning framework with autograd from scratch ☆111 · Updated last year
- Mixed-precision training from scratch with tensors and CUDA ☆22 · Updated 11 months ago
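For the LoRA entry above: LoRA (Hu et al., 2021) freezes the pretrained weight W and learns a low-rank update ΔW = (α/r)·BA, with B of shape (d_out, r), A of shape (r, d_in), and r much smaller than both dimensions. A minimal numpy sketch under those assumptions; function and variable names are illustrative, not the listed repo's API.

```python
import numpy as np

def lora_linear(x, W, A, B, alpha=16.0):
    # y = x W^T + (alpha / r) * x A^T B^T: frozen base weight W plus
    # the trainable low-rank update B @ A (LoRA, Hu et al., 2021).
    # W: (d_out, d_in), A: (r, d_in), B: (d_out, r). Illustrative names only.
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 8, 4
W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01 # A: small Gaussian init, as in the paper
B = np.zeros((d_out, r))                  # B: zero init, so the update starts at zero
x = rng.standard_normal((2, d_in))
print(lora_linear(x, W, A, B).shape)      # (2, 8)
```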