lucidrains / flash-attention
Fast and memory-efficient exact attention
☆19 · Updated 9 months ago
Alternatives and similar repositories for flash-attention
Users interested in flash-attention are comparing it to the libraries listed below.
- Attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆83 · Updated 3 months ago
- Implementation of the proposed Adam-atan2 from Google DeepMind in Pytorch (see the sketch after this list) ☆103 · Updated 5 months ago
- Unofficial PyTorch implementation of Google's FNet: Mixing Tokens with Fourier Transforms. With checkpoints. ☆74 · Updated 2 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆213 · Updated 2 years ago
- Implementation of Agent Attention in Pytorch ☆89 · Updated 10 months ago
- Unofficial PyTorch Implementation for pNLP-Mixer: an Efficient all-MLP Architecture for Language (https://arxiv.org/abs/2202.04350) ☆63 · Updated 3 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆60 · Updated last year
- Axial Positional Embedding for Pytorch ☆79 · Updated 2 months ago
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 ☆93 · Updated last year
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences" ☆71 · Updated 2 years ago
- Implementation of Fast Transformer in Pytorch ☆174 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆50 · Updated 3 years ago
- Implementation of Linformer for Pytorch ☆286 · Updated last year
- Implementation of a Light Recurrent Unit in Pytorch ☆46 · Updated 7 months ago
- [ICLR 2023] Official implementation of TNN in our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling ☆79 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆64 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆88 · Updated 10 months ago
- Root Mean Square Layer Normalization (see the sketch after this list) ☆241 · Updated 2 years ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆118 · Updated 7 months ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆60 · Updated 3 years ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … (see the sketch after this list) ☆164 · Updated last year
- Local Attention - Flax module for Jax ☆21 · Updated 3 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- ☆103 · Updated last year
- An implementation of FAdam (Fisher Adam) in PyTorch ☆43 · Updated 11 months ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · Updated 4 years ago
- Implementation of Infini-Transformer in Pytorch ☆111 · Updated 4 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆100 · Updated 2 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆74 · Updated 2 years ago
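
For the Adam-atan2 entry above, a minimal sketch of the core idea, not the linked repo's code: Adam's division m̂ / (√v̂ + ε) is replaced by the scale-invariant, epsilon-free a · atan2(m̂, b · √v̂). The function name and the constants `a` and `b` here are placeholders standing in for the paper's scaling constants.

```python
# Hedged sketch of the Adam-atan2 update: replace Adam's
#   m_hat / (sqrt(v_hat) + eps)
# with the eps-free
#   a * atan2(m_hat, b * sqrt(v_hat)).
# a and b are placeholder constants; this is not the linked repo's code.
import torch

def adam_atan2_step(param, grad, m, v, lr, step,
                    beta1=0.9, beta2=0.99, a=1.0, b=1.0):
    m.mul_(beta1).add_(grad, alpha=1 - beta1)            # first moment EMA
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # second moment EMA
    m_hat = m / (1 - beta1 ** step)                      # bias correction
    v_hat = v / (1 - beta2 ** step)
    update = a * torch.atan2(m_hat, b * v_hat.sqrt())    # no epsilon needed
    param.add_(update, alpha=-lr)
    return param
```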
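For the RMSNorm entry, a minimal PyTorch sketch of the technique itself (not the linked repo's code): features are rescaled by their root mean square and a learned gain, with no mean subtraction and no bias, unlike LayerNorm.

```python
# Minimal RMSNorm sketch: y = x / rms(x) * gain, computed over the feature dim.
import torch
from torch import nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-8):
        super().__init__()
        self.eps = eps
        self.gain = nn.Parameter(torch.ones(dim))  # learned per-feature scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # root mean square over the last (feature) dimension
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).sqrt()
        return x / rms * self.gain
```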
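And for the GQA entry, a hedged sketch of grouped-query attention under common shape conventions (names and shapes are illustrative, not the linked repo's API): several query heads share each key/value head, so each kv head is repeated across its group before ordinary scaled-dot-product attention.

```python
# Sketch of grouped-query attention (GQA): q_heads query heads share a smaller
# set of kv heads; each kv head is repeated to cover its group of query heads.
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    # q: (batch, q_heads, seq, dim); k, v: (batch, kv_heads, seq, dim)
    q_heads, kv_heads = q.shape[1], k.shape[1]
    assert q_heads % kv_heads == 0
    group = q_heads // kv_heads
    # expand each kv head across its group of query heads
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    return F.scaled_dot_product_attention(q, k, v)

q = torch.randn(2, 8, 16, 64)  # 8 query heads
k = torch.randn(2, 2, 16, 64)  # 2 kv heads -> groups of 4
v = torch.randn(2, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # (2, 8, 16, 64)
```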