wuch15 / Fastformer
A PyTorch & Keras implementation and demo of Fastformer.
☆187 Updated 2 years ago
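For context, the core idea behind this repository is the additive attention from the paper "Fastformer: Additive Attention Can Be All You Need", which summarizes queries and keys into single global vectors so attention runs in linear time. Below is a minimal single-head PyTorch sketch of that mechanism; the module name, shapes, and the omission of multi-head splitting and masking are illustrative assumptions, not this repository's actual API.

```python
# Minimal single-head sketch of Fastformer-style additive attention
# (assumed shapes: x is (batch, seq_len, dim); no multi-head, no masking).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, 1)   # scalar score per query position
        self.w_k = nn.Linear(dim, 1)   # scalar score per mixed-key position
        self.to_out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        # Global query: softmax-weighted sum of query vectors over the sequence.
        alpha = F.softmax(self.w_q(q) * self.scale, dim=1)   # (b, n, 1)
        global_q = (alpha * q).sum(dim=1, keepdim=True)      # (b, 1, d)
        # Mix the global query into every key by element-wise product.
        p = k * global_q                                     # (b, n, d)
        # Global key: softmax-weighted sum of the mixed keys.
        beta = F.softmax(self.w_k(p) * self.scale, dim=1)
        global_k = (beta * p).sum(dim=1, keepdim=True)       # (b, 1, d)
        # Mix the global key into the values, transform, add query residual.
        u = v * global_k
        return self.to_out(u) + q

# Usage: shapes in, shapes out.
attn = AdditiveAttention(dim=64)
x = torch.randn(2, 128, 64)
out = attn(x)   # (2, 128, 64)
```

Because each step is a weighted sum or an element-wise product, cost is O(n·d) in sequence length n, versus O(n²·d) for standard dot-product attention.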
Alternatives and similar repositories for Fastformer:
Users interested in Fastformer are comparing it to the libraries listed below.
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆360 Updated last year
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve existing… ☆250 Updated 3 years ago
- Implementation of Fast Transformer in PyTorch ☆173 Updated 3 years ago
- ☆83 Updated 5 years ago
- Unofficial PyTorch implementation of Fastformer based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆134 Updated 3 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆225 Updated 2 years ago
- FLASHQuad_pytorch ☆67 Updated 2 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 Updated 2 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆195 Updated last year
- A *tuned* minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆107 Updated 3 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆259 Updated 3 years ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 Updated 2 years ago
- My take on a practical implementation of Linformer for PyTorch. ☆413 Updated 2 years ago
- Sequence modeling with Mega. ☆295 Updated 2 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences" ☆70 Updated last year
- An implementation of local windowed attention for language modeling ☆429 Updated 2 months ago
- Fully featured implementation of Routing Transformer ☆291 Updated 3 years ago
- ☆216 Updated 4 years ago
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆127 Updated 4 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in PyTorch ☆224 Updated last year
- Implementation of Linformer for PyTorch ☆274 Updated last year
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆260 Updated 3 years ago
- An implementation of masked language modeling for PyTorch, made as concise and simple as possible ☆178 Updated last year
- PyTorch; masked language model; BERT ☆72 Updated 5 years ago
- ☆251 Updated 2 years ago
- Implementations of some RNNs ☆49 Updated last year
- [NeurIPS'22 Spotlight] A Contrastive Framework for Neural Text Generation ☆471 Updated last year
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆160 Updated last year
- A Transformer-based, single-model, multi-scale VAE ☆55 Updated 3 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆163 Updated 4 years ago