lucidrains / FLASH-pytorch
Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time"
☆372 · Sep 26, 2023 · Updated 2 years ago
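For quick orientation, below is a minimal usage sketch of the layer this repository implements (the Gated Attention Unit from "Transformer Quality in Linear Time"). The package name `flash_pytorch`, the `FLASH` class, and the keyword arguments shown (`dim`, `group_size`, `causal`, `query_key_dim`, `expansion_factor`) are assumptions based on the repository's README and may differ from the current API; treat this as an illustrative sketch, not the definitive interface.

```python
# Minimal sketch; assumed API: `flash_pytorch.FLASH` with the kwargs below.
# Check the repository README for the exact, up-to-date interface.
import torch
from flash_pytorch import FLASH

flash = FLASH(
    dim = 512,             # model dimension
    group_size = 256,      # sequence is processed in groups of this size
    causal = True,         # autoregressive masking
    query_key_dim = 128,   # shared query/key dimension
    expansion_factor = 2.  # hidden dimension multiplier for the gating
)

x = torch.randn(1, 1024, 512)   # (batch, sequence, dim)
out = flash(x)                  # (1, 1024, 512)
```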
Alternatives and similar repositories for FLASH-pytorch
Users interested in FLASH-pytorch are comparing it to the libraries listed below
- Implementation of fused cosine similarity attention in the same style as Flash Attention · ☆220 · Feb 13, 2023 · Updated 3 years ago
- FLASHQuad_pytorch · ☆68 · Apr 1, 2022 · Updated 3 years ago
- A Transformer model based on the Gated Attention Unit (preview version) · ☆98 · Feb 24, 2023 · Updated 2 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena · ☆207 · Aug 26, 2023 · Updated 2 years ago
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick topk · ☆47 · Jul 16, 2023 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 · ☆49 · Apr 6, 2022 · Updated 3 years ago
- ☆106 · Mar 9, 2024 · Updated last year
- Implementation of ETSformer, state of the art time-series Transformer, in Pytorch · ☆155 · Aug 26, 2023 · Updated 2 years ago
- Pytorch library for fast transformer implementations · ☆1,761 · Mar 23, 2023 · Updated 2 years ago
- Fully featured implementation of Routing Transformer · ☆300 · Nov 6, 2021 · Updated 4 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch · ☆76 · Dec 4, 2022 · Updated 3 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch · ☆120 · Aug 4, 2021 · Updated 4 years ago
- GAU-alpha-pytorch · ☆20 · May 11, 2022 · Updated 3 years ago
- Implementation of Retrieval-Augmented Denoising Diffusion Probabilistic Models in Pytorch · ☆66 · May 5, 2022 · Updated 3 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… · ☆51 · May 10, 2022 · Updated 3 years ago
- Official Pytorch Implementation for "Continual Transformers: Redundancy-Free Attention for Online Inference" [ICLR 2023] · ☆28 · Oct 16, 2023 · Updated 2 years ago
- An implementation of Performer, a linear attention-based transformer, in Pytorch · ☆1,172 · Feb 2, 2022 · Updated 4 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper · ☆82 · Oct 30, 2021 · Updated 4 years ago
- An implementation of local windowed attention for language modeling · ☆496 · Jul 16, 2025 · Updated 6 months ago
- ☆201 · Dec 4, 2023 · Updated 2 years ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT · ☆224 · Aug 20, 2024 · Updated last year
- A concise but complete full-attention transformer with a set of promising experimental features from various papers · ☆5,800 · Feb 7, 2026 · Updated last week
- 🦁 Lion, a new optimizer discovered by Google Brain using genetic algorithms that is purportedly better than Adam(w), in Pytorch · ☆2,184 · Nov 27, 2024 · Updated last year
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights · ☆19 · Oct 9, 2022 · Updated 3 years ago
- ☆12 · Jan 17, 2024 · Updated 2 years ago
- Implementation of Fast Transformer in Pytorch · ☆176 · Aug 26, 2021 · Updated 4 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings · ☆46 · May 23, 2023 · Updated 2 years ago
- Sequence modeling with Mega. · ☆303 · Jan 28, 2023 · Updated 3 years ago
- Implementation of Rotary Embeddings, from the Roformer paper, in Pytorch · ☆804 · Jan 30, 2026 · Updated 2 weeks ago
- Implementation for paper "Disentangled Speech Representation Learning for One-Shot Cross-Lingual Voice Conversion Using β-VAE" · ☆44 · Apr 10, 2023 · Updated 2 years ago
- Implementation of GateLoop Transformer in Pytorch and Jax · ☆92 · Jun 18, 2024 · Updated last year
- Implementation of Perceiver AR, Deepmind's new long-context attention network based on Perceiver architecture, in Pytorch · ☆94 · Apr 10, 2023 · Updated 2 years ago
- ☆24 · Sep 25, 2024 · Updated last year
- Transformer based on a variant of attention that has linear complexity with respect to sequence length · ☆827 · May 5, 2024 · Updated last year
- Implementation of Invariant Point Attention, used for coordinate refinement in the structure module of Alphafold2, as a standalone Pytorc… · ☆171 · Nov 25, 2022 · Updated 3 years ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in Pytorch · ☆655 · Dec 27, 2024 · Updated last year
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones · ☆201 · Mar 24, 2021 · Updated 4 years ago
- Reformer, the efficient Transformer, in Pytorch · ☆2,193 · Jun 21, 2023 · Updated 2 years ago
- Implementation of Block Recurrent Transformer - Pytorch · ☆224 · Aug 20, 2024 · Updated last year