wilile26811249 / Fastformer-PyTorch
Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need".
☆134 · Updated 3 years ago
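For orientation, below is a minimal single-head sketch of the additive attention the paper describes: the queries are attention-pooled into one global query, which modulates the keys; the modulated keys are pooled into one global key, which modulates the values. The class and parameter names (`AdditiveAttention`, `w_q`, `w_k`) are illustrative only and are not this repo's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Single-head sketch of Fastformer-style additive attention (Wu et al., 2021)."""

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, 1)   # scalar score per query, for global-query pooling
        self.w_k = nn.Linear(dim, 1)   # scalar score per key, for global-key pooling
        self.to_out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):              # x: (batch, seq_len, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)

        # Pool the queries into a single global query vector.
        alpha = F.softmax(self.w_q(q) * self.scale, dim=1)    # (b, n, 1)
        global_q = (alpha * q).sum(dim=1, keepdim=True)       # (b, 1, d)

        # Mix the global query into every key, then pool into a global key.
        p = global_q * k                                      # (b, n, d)
        beta = F.softmax(self.w_k(p) * self.scale, dim=1)
        global_k = (beta * p).sum(dim=1, keepdim=True)        # (b, 1, d)

        # Mix the global key into the values; residual connection with the queries.
        u = global_k * v
        return self.to_out(u) + q
```

As a quick shape check, `AdditiveAttention(64)(torch.randn(2, 128, 64))` returns a `(2, 128, 64)` tensor; because attention reduces to two pooling passes, the layer costs O(n·d) rather than the O(n²·d) of standard self-attention.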
Alternatives and similar repositories for Fastformer-PyTorch:
Users interested in Fastformer-PyTorch are comparing it to the libraries listed below:
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- Implementation of Fast Transformer in PyTorch ☆173 · Updated 3 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆118 · Updated 3 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆152 · Updated last year
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆60 · Updated 2 years ago
- A PyTorch & Keras implementation and demo of Fastformer ☆188 · Updated 2 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆259 · Updated 3 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆45 · Updated 4 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆119 · Updated 3 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- Implementation of Online Label Smoothing in PyTorch ☆94 · Updated 2 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- Unofficial PyTorch implementation of Attention Free Transformer (AFT) layers by Apple Inc. ☆235 · Updated 3 years ago
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆225 · Updated 2 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆160 · Updated last year
- Axial Positional Embedding for PyTorch ☆77 · Updated last month
- Implementation of the GBST block from the Charformer paper, in PyTorch ☆116 · Updated 3 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences" ☆70 · Updated 2 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith, and Mike Lewis ☆146 · Updated 3 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch ☆37 · Updated 3 years ago
- ☆218 · Updated 4 years ago
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆131 · Updated 3 weeks ago
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 4 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in PyTorch ☆225 · Updated last year
- A Python library for highly configurable transformers, easing model architecture search and experimentation ☆49 · Updated 3 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆33 · Updated 4 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆131 · Updated 10 months ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch ☆73 · Updated 2 years ago
- PyTorch examples repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆62 · Updated 8 months ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆63 · Updated 3 years ago