knotgrass / attention
Several types of attention modules written in PyTorch for learning purposes (a minimal attention sketch is shown below)
☆52 · Updated last year
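As a rough illustration of the kind of module such a learning-oriented repository might contain (this sketch was written for this page and is not taken from knotgrass/attention; the class name and dimensions are arbitrary), here is a minimal single-head scaled dot-product self-attention layer in PyTorch:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Minimal single-head scaled dot-product self-attention (illustrative only)."""

    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)  # joint Q, K, V projection
        self.out = nn.Linear(dim, dim, bias=False)      # output projection
        self.scale = dim ** -0.5                        # 1 / sqrt(d)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (batch, seq_len, seq_len)
        attn = attn.softmax(dim=-1)
        return self.out(attn @ v)

x = torch.randn(2, 16, 64)
print(SelfAttention(64)(x).shape)  # torch.Size([2, 16, 64])
```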
Alternatives and similar repositories for attention
Users interested in attention are comparing it to the libraries listed below.
- PyTorch implementation of MoE (mixture of experts); a minimal top-k routing sketch appears after this list ☆49 · Updated 4 years ago
- Implementation of Infini-Transformer in PyTorch ☆113 · Updated 9 months ago
- Experiments on Multi-Head Latent Attention ☆97 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from …"; a minimal GQA sketch appears after this list ☆180 · Updated last year
- PyTorch implementation of the sparse attention from the paper: "Generating Long Sequences with Sparse Transformers" ☆88 · Updated this week
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http…) ☆106 · Updated last year
- ☆292 · Updated 9 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆78 · Updated 2 years ago
- Playground for Transformers ☆53 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆106 · Updated this week
- A repository for DenseSSMs ☆88 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆99 · Updated last year
- This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs). ☆11 · Updated last year
- Implementation of Agent Attention in PyTorch ☆91 · Updated last year
- ☆134 · Updated last year
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated last week
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆25 · Updated 5 months ago
- Root Mean Square Layer Normalization (a minimal RMSNorm sketch appears after this list) ☆254 · Updated 2 years ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆177 · Updated 6 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. ☆96 · Updated 9 months ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆120 · Updated last year
- PyTorch implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated last week
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆119 · Updated 11 months ago
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆57 · Updated last year
- ☆42 · Updated last year
- Get down and dirty with FlashAttention-2.0 in PyTorch: plug and play, no complex CUDA kernels ☆108 · Updated 2 years ago
- ☆73 · Updated 8 months ago
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆109 · Updated this week
- State Space Models ☆70 · Updated last year
- Implementation of the proposed DeepCrossAttention by Heddes et al. at Google Research, in PyTorch ☆94 · Updated 7 months ago
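For readers comparing the mixture-of-experts repositories above, here is a minimal sketch of top-k expert routing, referenced from the MoE entry in the list. It is illustrative only and not taken from any of the listed repositories; the expert architecture, gating, and dimensions are assumptions made for the example:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal top-k mixture-of-experts feed-forward layer (illustrative sketch)."""

    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts, bias=False)  # token router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim); each token is sent to its top-k experts
        scores = self.gate(x)                              # (num_tokens, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)         # (num_tokens, k)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_idx, slot = (idx == e).nonzero(as_tuple=True)  # tokens routed to expert e
            if token_idx.numel():
                out[token_idx] += weights[token_idx, slot, None] * expert(x[token_idx])
        return out

x = torch.randn(8, 32)
print(TopKMoE(32)(x).shape)  # torch.Size([8, 32])
```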
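Similarly, a minimal sketch of grouped-query attention, referenced from the GQA entry above: several query heads share one key/value head, which is repeated so it lines up with its group of query heads. This is not the listed repository's code; shapes and head counts are made up for the example:

```python
import torch

def grouped_query_attention(q, k, v, num_kv_heads: int):
    """Grouped-query attention: each key/value head serves a group of query heads.

    q: (batch, num_q_heads, seq_len, head_dim)
    k, v: (batch, num_kv_heads, seq_len, head_dim)
    """
    num_q_heads, head_dim = q.shape[1], q.shape[-1]
    group = num_q_heads // num_kv_heads
    # Repeat each KV head so it lines up with its group of query heads.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    attn = (q @ k.transpose(-2, -1)) * head_dim ** -0.5   # (batch, heads, seq, seq)
    attn = attn.softmax(dim=-1)
    return attn @ v                                       # (batch, heads, seq, head_dim)

q = torch.randn(1, 8, 16, 64)   # 8 query heads
k = torch.randn(1, 2, 16, 64)   # 2 shared key heads
v = torch.randn(1, 2, 16, 64)   # 2 shared value heads
print(grouped_query_attention(q, k, v, num_kv_heads=2).shape)  # torch.Size([1, 8, 16, 64])
```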
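Finally, a minimal sketch of root mean square layer normalization, referenced from the RMSNorm entry above and following the standard formula y = g · x / sqrt(mean(x²) + eps); again an illustrative sketch, not the listed repository's implementation:

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root mean square layer normalization: y = g * x / sqrt(mean(x^2) + eps)."""

    def __init__(self, dim: int, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learnable gain g

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).sqrt()
        return self.weight * x / rms

x = torch.randn(2, 16, 64)
print(RMSNorm(64)(x).shape)  # torch.Size([2, 16, 64])
```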