leaderj1001 / Synthesizer-Rethinking-Self-Attention-Transformer-Models
Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch
☆70 · Updated 4 years ago
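For reference, the core mechanism this repo implements: the paper's Dense Synthesizer drops the query-key dot product and instead synthesizes the attention map directly from each token with a small MLP. Below is a minimal single-head sketch of that idea, assuming a fixed maximum sequence length; the names `DenseSynthesizerAttention`, `max_len`, and `hidden` are illustrative and not taken from this repository's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseSynthesizerAttention(nn.Module):
    """Single-head Dense Synthesizer sketch: attention logits are produced
    per token by a two-layer MLP, with no query-key interaction."""
    def __init__(self, dim, max_len, hidden=64):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden)      # token -> hidden
        self.w2 = nn.Linear(hidden, max_len)  # hidden -> one logit per position
        self.value = nn.Linear(dim, dim)
        self.max_len = max_len

    def forward(self, x):
        # x: (batch, seq_len, dim), with seq_len <= max_len
        b, n, _ = x.shape
        logits = self.w2(F.relu(self.w1(x)))[:, :, :n]  # (b, n, n) synthesized logits
        attn = F.softmax(logits, dim=-1)
        return attn @ self.value(x)                     # (b, n, dim)

# quick shape check
x = torch.randn(2, 16, 128)
out = DenseSynthesizerAttention(dim=128, max_len=64)(x)  # -> (2, 16, 128)
```

The paper's Random Synthesizer variant goes further and learns the attention logits as a free (max_len × max_len) parameter shared across all inputs, optionally factorized into low-rank pieces.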
Related projects
Alternatives and complementary repositories for Synthesizer-Rethinking-Self-Attention-Transformer-Models
- PyTorch implementation of Pay Attention to MLPs☆39 · Updated 3 years ago
- Code for Explicit Sparse Transformer☆57 · Updated last year
- Implementation of RealFormer using PyTorch☆102 · Updated 3 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models"☆71 · Updated last year
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch☆116 · Updated 3 years ago
- [ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it.☆161 · Updated 3 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference"☆27 · Updated 4 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective☆86 · Updated 3 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity)☆71 · Updated 4 years ago
- Official code for Group-Transformer (Scale down Transformer by Grouping Features for a Lightweight Character-level Language Model, COLING…☆25 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in PyTorch☆106 · Updated 3 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate☆150 · Updated last year
- Code for "Understanding and Improving Layer Normalization"☆46 · Updated 4 years ago
- Custom PyTorch implementation of MoCo v3☆44 · Updated 3 years ago
- The official implementation of You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Natu…☆48 · Updated 3 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch☆55 · Updated 3 years ago
- Implementation of Mogrifier LSTM in PyTorch☆35 · Updated 4 years ago
- Code for the ACL 2020 paper Character-Level Translation with Self-Attention☆32 · Updated 4 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron …☆32 · Updated last year
- Source code for NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference"☆44 · Updated 2 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845☆119 · Updated 3 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model☆59 · Updated 4 years ago
- A Transformer-based single-model, multi-scale VAE☆53 · Updated 3 years ago
- Code for EMNLP 2020 paper CoDIR☆41 · Updated 2 years ago
- A PyTorch implementation of self-attention with relative position representations☆51 · Updated 3 years ago
- ☆22 · Updated 3 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention"☆43 · Updated 3 years ago
- How Does Selective Mechanism Improve Self-attention Networks?☆27 · Updated 3 years ago
- Research Code for NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER…☆119 · Updated 3 years ago
- ☆32 · Updated 3 years ago