AkiRusProd / numpy-transformer
A numpy implementation of the Transformer model in "Attention is All You Need"
☆58 · Updated last year
Alternatives and similar repositories for numpy-transformer
Users interested in numpy-transformer are comparing it to the repositories listed below.
- Custom torch-style machine learning framework with automatic differentiation implemented on numpy; allows building GANs, VAEs, etc. ☆81 · Updated 3 weeks ago
- Notes on quantization in neural networks ☆117 · Updated 2 years ago
- ☆236 · Updated last year
- Well documented, unit tested, type checked and formatted implementation of a vanilla transformer, for educational purposes. ☆282 · Updated last year
- Simple byte pair encoding (BPE) mechanism used for tokenization, written purely in C ☆146 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models, implemented using PyTorch ☆122 · Updated 2 years ago
- ☆178 · Updated 2 years ago
- Distributed training (multi-node) of a Transformer model ☆93 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- LLaMA 2 implemented from scratch in PyTorch ☆366 · Updated 2 years ago
- LoRA and DoRA from-scratch implementations ☆215 · Updated last year
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL and Python, with multi-GPU support and automatic differentiation!) ☆162 · Updated 2 months ago
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug and play, no complex CUDA kernels ☆112 · Updated 2 years ago
- Custom kernels in the Triton language for accelerating LLMs ☆27 · Updated last year
- Documented and unit-tested educational deep learning framework with autograd, from scratch. ☆122 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. ☆236 · Updated 11 months ago
- Tutorial on how to build BERT from scratch ☆102 · Updated last year
- Playground for Transformers ☆53 · Updated 2 years ago
- Prune transformer layers ☆74 · Updated last year
- A minimal cache manager for PagedAttention, on top of llama3. ☆135 · Updated last year
- Building a Transformer from scratch ☆84 · Updated 6 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆198 · Updated 8 months ago
- Making the official Triton tutorials actually comprehensible ☆111 · Updated 5 months ago
- ☆232 · Updated 2 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆334 · Updated 3 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆53 · Updated last year
- Tutorials for Triton, a language for writing GPU kernels ☆73 · Updated 2 years ago
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆235 · Updated last year
- Slides, notes, and materials for the workshop ☆339 · Updated last year
- Deep learning library implemented from scratch in numpy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ☆53 · Updated last year