wuch15 / Fastformer
A PyTorch & Keras implementation and demo of Fastformer.
☆189 · Updated 2 years ago
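For orientation, the core idea of the paper behind this repo ("Fastformer: Additive Attention Can Be All You Need") is to replace pairwise self-attention with additive attention that pools a single global query and a single global key, making the layer linear in sequence length. The sketch below is a simplified single-head PyTorch reading of that idea, not this repo's actual API: the class name and layer layout are assumptions, and head splitting, masking, and dropout are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Single-head sketch of Fastformer-style additive attention.

    Simplifications vs. the paper/repo (hypothetical layer, not the
    repo's API): one head, no attention mask, no dropout.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.w_q = nn.Linear(dim, 1)   # scores tokens to pool a global query
        self.w_k = nn.Linear(dim, 1)   # scores tokens to pool a global key
        self.to_out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):              # x: (batch, seq, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        # Pool the queries into one global query vector (linear in seq length).
        alpha = F.softmax(self.w_q(q).squeeze(-1) * self.scale, dim=-1)
        global_q = torch.einsum('bn,bnd->bd', alpha, q)
        # Mix the global query into every key, then pool a global key.
        p = k * global_q.unsqueeze(1)
        beta = F.softmax(self.w_k(p).squeeze(-1) * self.scale, dim=-1)
        global_k = torch.einsum('bn,bnd->bd', beta, p)
        # Mix the global key into the values; residual back to the queries.
        u = v * global_k.unsqueeze(1)
        return self.to_out(u) + q
```

Note that both softmaxes are over one scalar score per token and all other interaction is element-wise, which is what keeps the cost O(n) rather than the O(n²) of standard self-attention.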
Alternatives and similar repositories for Fastformer
Users interested in Fastformer are comparing it to the libraries listed below.
- ☆83 · Updated 5 years ago
- Implementation of the Transformer variant proposed in "Transformer Quality in Linear Time" ☆366 · Updated last year
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- A *tuned* minimal PyTorch re-implementation of OpenAI GPT (Generative Pretrained Transformer) training ☆115 · Updated 3 years ago
- PyTorch implementation of "Block Recurrent Transformers" (Hutchins & Schlag et al., 2022) ☆84 · Updated 3 years ago
- Implementation of RealFormer using PyTorch ☆100 · Updated 4 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of the paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆251 · Updated 3 years ago
- Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆134 · Updated 3 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆196 · Updated 2 years ago
- ☆251 · Updated 2 years ago
- Code for the NAACL 2022 long paper "DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings" ☆294 · Updated 2 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia by Summarizing Long Sequences" ☆71 · Updated 2 years ago
- Multi-head attention in PyTorch ☆153 · Updated 6 years ago
- [ICLR 2022] Official implementation of cosformer-attention in "cosFormer: Rethinking Softmax in Attention" ☆194 · Updated 2 years ago
- A single-model, multi-scale VAE based on the Transformer ☆57 · Updated 4 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it. ☆164 · Updated 4 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- FLASHQuad_pytorch ☆67 · Updated 3 years ago
- Implementation of Fast Transformer in PyTorch ☆175 · Updated 3 years ago
- ☆19 · Updated 4 years ago
- Implementation of the paper "Self-Attention with Relative Position Representations" ☆135 · Updated 4 years ago
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms (a minimal sketch of the Fourier-mixing idea follows this list) ☆259 · Updated 4 years ago
- PyTorch implementation of BERT in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" ☆107 · Updated 6 years ago
- Sequence modeling with Mega. ☆296 · Updated 2 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆108 · Updated 3 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆123 · Updated last year
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆162 · Updated last year
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆65 · Updated 4 years ago
- A PyTorch implementation of Transformer in "Attention is All You Need" ☆106 · Updated 4 years ago
- This is a code repository for the ACL 2022 paper "ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generati… ☆34 · Updated 2 years ago
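As referenced in the FNet entry above, FNet's token-mixing layer is essentially parameter-free: it replaces self-attention with a 2D discrete Fourier transform over the sequence and hidden dimensions, keeping only the real part. A minimal sketch under that reading (the module name is hypothetical, and the full model also interleaves feed-forward blocks and layer norms):

```python
import torch
import torch.nn as nn

class FourierMixing(nn.Module):
    """Parameter-free token mixing from the FNet paper: a 2D FFT over the
    hidden and sequence dimensions, keeping only the real part."""
    def forward(self, x):              # x: (batch, seq, dim)
        # FFT along the hidden dimension, then along the sequence dimension.
        return torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2).real

# Quick check: shapes are preserved and no parameters are learned.
x = torch.randn(2, 16, 64)
print(FourierMixing()(x).shape)        # torch.Size([2, 16, 64])
```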