lucidrains / flash-attention
Fast and memory-efficient exact attention
☆19 · Updated last year
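The repository's own API is not shown on this page, so as a hedged illustration of what "fast and memory-efficient exact attention" means in practice, the sketch below uses PyTorch's built-in `torch.nn.functional.scaled_dot_product_attention` (which dispatches to a FlashAttention-style fused kernel when one is available for the device and dtype). This is not the API of lucidrains/flash-attention; the tensor shapes are assumptions chosen for the example.

```python
# Minimal sketch of memory-efficient exact attention via PyTorch's fused SDPA.
# Not the lucidrains/flash-attention API; shapes below are illustrative assumptions.
import torch
import torch.nn.functional as F

batch, heads, seq_len, head_dim = 2, 8, 1024, 64
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)
v = torch.randn(batch, heads, seq_len, head_dim)

# Computes softmax(QK^T / sqrt(d)) V; a fused (FlashAttention-style) backend avoids
# materializing the full seq_len x seq_len attention matrix when it is available.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```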
Alternatives and similar repositories for flash-attention
Users interested in flash-attention are comparing it to the libraries listed below.
- Unofficial PyTorch implementation of Google's FNet: Mixing Tokens with Fourier Transforms. With checkpoints. ☆76 · Updated 3 years ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆123 · Updated 9 months ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆214 · Updated 2 years ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆106 · Updated last year
- Root Mean Square Layer Normalization ☆254 · Updated 2 years ago
- Implementation of Agent Attention in Pytorch ☆91 · Updated last year
- Implementation of "Attention Is Off By One" by Evan Miller ☆196 · Updated 2 years ago
- Implementation of Linformer for Pytorch ☆298 · Updated last year
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆90 · Updated 3 months ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆179 · Updated last year
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆62 · Updated 2 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆121 · Updated 11 months ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆369 · Updated last year
- Implementation of Fast Transformer in Pytorch ☆176 · Updated 4 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆205 · Updated 2 years ago
- In this repository, we explore model compression for transformer architectures via quantization. We specifically explore quantization awa… ☆24 · Updated 4 years ago
- an implementation of FAdam (Fisher Adam) in PyTorch ☆49 · Updated 2 months ago
- Sequence modeling with Mega. ☆300 · Updated 2 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- ☆293 · Updated 9 months ago
- Implementation of a Light Recurrent Unit in Pytorch ☆48 · Updated 11 months ago
- ☆106 · Updated last year
- Implementation of GateLoop Transformer in Pytorch and Jax ☆90 · Updated last year
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 ☆95 · Updated 2 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆229 · Updated last year
- Get down and dirty with FlashAttention2.0 in pytorch, plug in and play no complex CUDA kernels ☆109 · Updated 2 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences" ☆70 · Updated 2 years ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated last week