wuch15 / Fastformer
A PyTorch & Keras implementation and demo of Fastformer.
☆190 · Updated 3 years ago
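For context, the core of Fastformer is additive attention with linear complexity: the queries are pooled into a single global query via learned scalar scores, the global query modulates every key elementwise, the modulated keys are pooled the same way into a global key, and the global key modulates the values. Below is a minimal single-head PyTorch sketch of that idea; names like `AdditiveAttentionSketch` are illustrative only and are not taken from the wuch15 repo, which implements the full multi-head version.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttentionSketch(nn.Module):
    """Simplified single-head sketch of Fastformer-style additive attention."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.q_score = nn.Linear(dim, 1)  # scalar score per query position
        self.k_score = nn.Linear(dim, 1)  # scalar score per key position
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, seq_len, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # Pool all queries into one global query with additive attention (O(n)).
        q_weights = F.softmax(self.q_score(q), dim=1)        # (b, n, 1)
        global_q = (q_weights * q).sum(dim=1, keepdim=True)  # (b, 1, d)

        # Mix the global query into every key elementwise, then pool the keys.
        p = global_q * k
        k_weights = F.softmax(self.k_score(p), dim=1)
        global_k = (k_weights * p).sum(dim=1, keepdim=True)  # (b, 1, d)

        # Mix the global key into the values; residual connection to the queries.
        u = global_k * v
        return self.to_out(u) + q
```

Because both pooling steps reduce over the sequence with a weighted sum, the layer runs in O(n·d) time rather than the O(n²·d) of standard self-attention, which is what the paper's title alludes to.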
Alternatives and similar repositories for Fastformer
Users interested in Fastformer are comparing it to the libraries listed below:
- ☆84 · Updated 5 years ago
- Implementation of RealFormer using PyTorch ☆101 · Updated 4 years ago
- Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆133 · Updated 4 years ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆369 · Updated 2 years ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- ☆19 · Updated 4 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆252 · Updated 3 years ago
- ☆254 · Updated 3 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- Implementation of Fast Transformer in PyTorch ☆177 · Updated 4 years ago
- A *tuned* minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training ☆118 · Updated 4 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆259 · Updated 4 years ago
- A single-model, multi-scale VAE based on the Transformer ☆57 · Updated 4 years ago
- PyTorch implementation of "Block Recurrent Transformers" (Hutchins & Schlag et al., 2022) ☆84 · Updated 3 years ago
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆196 · Updated 2 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆109 · Updated 6 years ago
- A PyTorch implementation of the Transformer in "Attention Is All You Need" ☆106 · Updated 4 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆166 · Updated 4 years ago
- Multi-head attention in PyTorch ☆154 · Updated 6 years ago
- Sequence modeling with Mega. ☆300 · Updated 2 years ago
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆228 · Updated 3 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models, https://arxiv.org/abs/2204.00408 ☆197 · Updated 2 years ago
- FLASHQuad_pytorch ☆68 · Updated 3 years ago
- Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings" ☆296 · Updated 3 years ago
- Worth-reading papers and related resources on attention mechanisms, Transformers, and pretrained language models (PLMs) such as BERT ☆130 · Updated 4 years ago
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆139 · Updated 4 years ago
- [EMNLP'19] Summary for Transformer Understanding ☆53 · Updated 5 years ago
- Official code for the ICLR 2022 paper "PoNet: Pooling Network for Efficient Token Mixing in Long Sequences" ☆33 · Updated 2 years ago
- Unofficial PyTorch implementation of Attention Free Transformer (AFT) layers by Apple Inc. ☆243 · Updated 3 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated last year