lucidrains / FLASH-pytorch
Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time"
☆350 · Updated last year
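
For quick orientation, a minimal usage sketch of the gated-attention (FLASH/GAU) layer this package exposes. The module path and constructor arguments (`dim`, `group_size`, `causal`, `query_key_dim`, `expansion_factor`) are assumptions based on the repository's typical interface; check the README of the current release.

```python
import torch
from flash_pytorch import FLASH  # assumed import path; see the repo README

# FLASH / Gated Attention Unit layer with chunked, linear-time attention
# (argument names below are assumptions, not a verified API)
flash = FLASH(
    dim = 512,             # model dimension
    group_size = 256,      # chunk (group) size for the local quadratic attention
    causal = True,         # autoregressive masking
    query_key_dim = 128,   # shared query/key dimension used by the GAU
    expansion_factor = 2.  # hidden expansion factor inside the gated unit
)

x = torch.randn(1, 1024, 512)  # (batch, sequence, dim)
out = flash(x)                 # (1, 1024, 512), same shape as the input
```
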
Related projects
Alternatives and complementary repositories for FLASH-pytorch
- [ICLR 2022] Official implementation of cosformer-attention in "cosFormer: Rethinking Softmax in Attention" ☆179 · Updated last year
- Rotary Transformer ☆822 · Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆507 · Updated last year
- An implementation of local windowed attention for language modeling ☆384 · Updated 2 months ago
- RoFormer V1 & V2 in PyTorch ☆474 · Updated 2 years ago
- Implementation of Linformer for PyTorch ☆257 · Updated 10 months ago
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆222 · Updated 2 years ago
- Implementation of Rotary Embeddings, from the RoFormer paper, in PyTorch ☆571 · Updated last week
- Sequence modeling with Mega ☆298 · Updated last year
- A PyTorch & Keras implementation and demo of Fastformer ☆187 · Updated 2 years ago
- A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models ☆642 · Updated last year
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆698 · Updated 6 months ago
- Implementation of memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆360 · Updated last year
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 · Updated last year
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆251 · Updated 3 years ago
- Unofficial PyTorch implementation of Attention Free Transformer (AFT) layers by Apple Inc. ☆228 · Updated 2 years ago
- PyTorch re-implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al., https://arxiv.org/abs/1701.06538 ☆983 · Updated 7 months ago
- My take on a practical implementation of Linformer for PyTorch ☆407 · Updated 2 years ago
- Code release for "Flowformer: Linearizing Transformers with Conservation Flows" (ICML 2022), https://arxiv.org/pdf/2202.06258.pdf ☆304 · Updated 4 months ago
- Fully featured implementation of Routing Transformer ☆284 · Updated 3 years ago
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆124 · Updated 3 years ago
- [ACL 2023] DiffusionBERT: Improving Generative Masked Language Models with Diffusion Models ☆296 · Updated 9 months ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆203 · Updated last year
- The pure and clear PyTorch Distributed Training Framework ☆275 · Updated 9 months ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models, https://arxiv.org/abs/2204.00408 ☆192 · Updated last year
- Implementation of Block Recurrent Transformer (PyTorch) ☆213 · Updated 3 months ago
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆735 · Updated this week