knotgrass / attention
several types of attention modules written in PyTorch for learning purposes
☆53 · Updated 11 months ago
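For orientation, here is a minimal sketch of the kind of learning-oriented module such a repository collects: single-head scaled dot-product attention in PyTorch. The class name, parameter names, and dimensions below are illustrative assumptions, not code taken from knotgrass/attention.

```python
# Illustrative single-head scaled dot-product attention (not code from the listed repo).
import math
import torch
import torch.nn as nn


class ScaledDotProductAttention(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        # Learned projections for queries, keys, values, and the output.
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)
        self.scale = 1.0 / math.sqrt(embed_dim)

    def forward(self, x, mask=None):
        # x: (batch, seq_len, embed_dim); mask: optional (seq_len, seq_len) of 0/1.
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale  # (batch, seq, seq)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)
        return self.out_proj(torch.matmul(attn, v))


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)  # (batch, seq_len, embed_dim)
    attn = ScaledDotProductAttention(embed_dim=64)
    print(attn(x).shape)  # torch.Size([2, 16, 64])
```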
Alternatives and similar repositories for attention
Users interested in attention are comparing it to the libraries listed below.
- Experiments on Multi-Head Latent Attention ☆95 · Updated last year
- Implementation of Infini-Transformer in PyTorch ☆111 · Updated 7 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆104 · Updated this week
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆77 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆174 · Updated 5 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆179 · Updated last year (see the GQA sketch after this list)
- This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs). ☆11 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 11 months ago
- PyTorch implementation of MoE (mixture of experts) ☆47 · Updated 4 years ago
- PyTorch implementation of the sparse attention from the paper: "Generating Long Sequences with Sparse Transformers" ☆86 · Updated 3 weeks ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year
- Playground for Transformers ☆52 · Updated last year
- ☆292 · Updated 8 months ago
- ☆134 · Updated last year
- A repository for DenseSSMs ☆88 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆120 · Updated 10 months ago
- ☆42 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆48 · Updated 11 months ago
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆185 · Updated 2 weeks ago
- Efficient Infinite Context Transformers with Infini-attention PyTorch Implementation + QwenMoE Implementation + Training Script + 1M cont… ☆84 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated last month
- ☆55 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated 2 weeks ago
- ☆76 · Updated last week
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆56 · Updated last year
- Timm model explorer ☆41 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- PyTorch Implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated last week
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug in and play, no complex CUDA kernels ☆107 · Updated 2 years ago
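One of the entries above is a grouped-query attention (GQA) implementation. As a hedged sketch of the idea, not code from any listed repository, GQA uses fewer key/value heads than query heads and shares each key/value head across a group of query heads; the class name, head counts, and use of `F.scaled_dot_product_attention` below are illustrative assumptions.

```python
# Illustrative grouped-query attention (GQA) sketch, not taken from any repository above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupedQueryAttention(nn.Module):
    def __init__(self, embed_dim: int, num_q_heads: int, num_kv_heads: int):
        super().__init__()
        assert num_q_heads % num_kv_heads == 0
        self.head_dim = embed_dim // num_q_heads
        self.num_q_heads = num_q_heads
        self.num_kv_heads = num_kv_heads
        self.q_proj = nn.Linear(embed_dim, num_q_heads * self.head_dim)
        self.k_proj = nn.Linear(embed_dim, num_kv_heads * self.head_dim)
        self.v_proj = nn.Linear(embed_dim, num_kv_heads * self.head_dim)
        self.out_proj = nn.Linear(num_q_heads * self.head_dim, embed_dim)

    def forward(self, x):
        b, t, _ = x.shape
        # Shape projections into (batch, heads, seq_len, head_dim).
        q = self.q_proj(x).view(b, t, self.num_q_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.num_kv_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.num_kv_heads, self.head_dim).transpose(1, 2)
        # Repeat each K/V head so every query head in its group attends to the same K/V.
        group = self.num_q_heads // self.num_kv_heads
        k = k.repeat_interleave(group, dim=1)
        v = v.repeat_interleave(group, dim=1)
        out = F.scaled_dot_product_attention(q, k, v)  # (b, num_q_heads, t, head_dim)
        out = out.transpose(1, 2).reshape(b, t, self.num_q_heads * self.head_dim)
        return self.out_proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)  # (batch, seq_len, embed_dim)
    gqa = GroupedQueryAttention(embed_dim=64, num_q_heads=8, num_kv_heads=2)
    print(gqa(x).shape)  # torch.Size([2, 16, 64])
```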