knotgrass / attention
Several types of attention modules written in PyTorch for learning purposes.
☆51 · Updated 7 months ago
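Since the repository collects attention modules written for learning, a minimal single-head scaled dot-product attention module gives a sense of what such implementations typically look like. This is an illustrative sketch with made-up names and dimensions, not code taken from knotgrass/attention:

```python
# Minimal sketch of single-head scaled dot-product attention (illustrative,
# not taken from the knotgrass/attention repository).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledDotProductAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.d_model = d_model
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

    def forward(self, x, mask=None):
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_model)  # (batch, seq, seq)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        return F.softmax(scores, dim=-1) @ v  # (batch, seq_len, d_model)

x = torch.randn(2, 16, 64)
attn = ScaledDotProductAttention(d_model=64)
print(attn(x).shape)  # torch.Size([2, 16, 64])
```

Multi-head attention splits d_model across several such heads and concatenates their outputs through a final output projection; many of the repositories listed below are efficiency-oriented variations on this basic pattern.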
Alternatives and similar repositories for attention
Users interested in attention are comparing it to the libraries listed below.
- Experiments on Multi-Head Latent Attention ☆89 · Updated 8 months ago
- Implementation of Infini-Transformer in Pytorch ☆111 · Updated 4 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from …" (see the GQA sketch after this list) ☆164 · Updated last year
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆53 · Updated last month
- This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs). ☆11 · Updated 11 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 7 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆72 · Updated last year
- ☆103 · Updated last year
- A repository for DenseSSMs ☆87 · Updated last year
- ☆11 · Updated 7 months ago
- Implementation of Agent Attention in Pytorch ☆89 · Updated 10 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆91 · Updated this week
- ☆42 · Updated last year
- Inference Speed Benchmark for Learning to (Learn at Test Time): RNNs with Expressive Hidden States ☆66 · Updated 10 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year
- Playground for Transformers ☆50 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆41 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆46 · Updated 2 years ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. ☆48 · Updated last year
- PyTorch implementation of MoE (mixture of experts) ☆43 · Updated 4 years ago
- PyTorch implementation of the sparse attention from the paper: "Generating Long Sequences with Sparse Transformers" ☆81 · Updated last month
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆53 · Updated 11 months ago
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug and play, no complex CUDA kernels ☆105 · Updated last year
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆60 · Updated last year
- ☆134 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆44 · Updated 8 months ago
- Code for the DDP tutorial ☆32 · Updated 3 years ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆117 · Updated 11 months ago
- Deep learning library implemented from scratch in numpy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ☆51 · Updated last year
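As a concrete illustration of one technique named above, grouped-query attention (GQA) lets each group of query heads share a single key/value head, interpolating between multi-head and multi-query attention. The sketch below is a rough, assumption-laden illustration (names and shapes are made up), not the linked repository's implementation:

```python
# Rough sketch of grouped-query attention (GQA): groups of query heads share
# one key/value head. Illustrative only; not the linked repository's code.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedQueryAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, n_kv_heads: int):
        super().__init__()
        assert d_model % n_heads == 0 and n_heads % n_kv_heads == 0
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = d_model // n_heads
        self.q_proj = nn.Linear(d_model, n_heads * self.head_dim)
        self.k_proj = nn.Linear(d_model, n_kv_heads * self.head_dim)
        self.v_proj = nn.Linear(d_model, n_kv_heads * self.head_dim)
        self.o_proj = nn.Linear(n_heads * self.head_dim, d_model)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # Repeat each key/value head so every query head in its group attends to it.
        group = self.n_heads // self.n_kv_heads
        k, v = k.repeat_interleave(group, dim=1), v.repeat_interleave(group, dim=1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        out = F.softmax(scores, dim=-1) @ v  # (b, n_heads, t, head_dim)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, -1))

x = torch.randn(2, 16, 64)
gqa = GroupedQueryAttention(d_model=64, n_heads=8, n_kv_heads=2)
print(gqa(x).shape)  # torch.Size([2, 16, 64])
```

With n_kv_heads=1 this reduces to multi-query attention, and with n_kv_heads=n_heads it recovers standard multi-head attention; the main practical benefit of the grouped setting is a smaller key/value cache at inference time.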