leaderj1001 / Synthesizer-Rethinking-Self-Attention-Transformer-Models
Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch
☆70 · Updated 4 years ago
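The paper this repo implements replaces query-key dot-product attention with attention weights synthesized directly from the input. As a minimal sketch of the paper's Dense Synthesizer variant (single head; the class name, `max_len` parameter, and layer sizes here are illustrative assumptions, not this repo's API):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseSynthesizer(nn.Module):
    """One Dense Synthesizer head: each token predicts its own row of
    attention logits with a small MLP, so no query-key dot products."""

    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(d_model, d_model),
            nn.ReLU(),
            nn.Linear(d_model, max_len),  # one logit per attendable position
        )
        self.value = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model), with seq_len <= max_len
        seq_len = x.size(1)
        logits = self.proj(x)[:, :, :seq_len]     # (batch, seq_len, seq_len)
        attn = F.softmax(logits, dim=-1)          # synthesized attention weights
        return torch.matmul(attn, self.value(x))  # (batch, seq_len, d_model)

# Example: DenseSynthesizer(d_model=64, max_len=128)(torch.randn(2, 100, 64))
# returns a tensor of shape (2, 100, 64).
```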
Alternatives and similar repositories for Synthesizer-Rethinking-Self-Attention-Transformer-Models:
Users interested in Synthesizer-Rethinking-Self-Attention-Transformer-Models are comparing it to the libraries listed below.
- Code for Explicit Sparse Transformer ☆60 · Updated last year
- Implementation of RealFormer using PyTorch ☆100 · Updated 4 years ago
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 3 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 5 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 3 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- Implementation of Mogrifier LSTM in PyTorch ☆35 · Updated 5 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆152 · Updated last year
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆57 · Updated 4 years ago
- Official code for Group-Transformer (Scale down Transformer by Grouping Features for a Lightweight Character-level Language Model, COLING… ☆25 · Updated 4 years ago
- The official implementation of You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Natu… ☆48 · Updated 3 years ago
- A PyTorch implementation of self-attention with relative position representations ☆50 · Updated 4 years ago
- A custom PyTorch implementation of MoCo v3 ☆45 · Updated 4 years ago
- The implementation of the multi-branch attentive Transformer (MAT) ☆33 · Updated 4 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆118 · Updated 3 years ago
- Curriculum Learning related papers and materials ☆54 · Updated 4 years ago
- ☆83 · Updated 5 years ago
- How Does Selective Mechanism Improve Self-attention Networks? ☆27 · Updated 4 years ago
- Code for the paper "Continual and Multi-Task Architecture Search" (ACL 2019) ☆41 · Updated 5 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on the main NMT datasets with SoTA performance ☆86 · Updated last year
- A Transformer-based single-model, multi-scale VAE ☆55 · Updated 3 years ago
- Research code for the NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": UNITER… ☆119 · Updated 4 years ago
- A simple PyTorch implementation of Multi-Sample Dropout ☆57 · Updated 5 years ago
- This repository contains the code for the paper in Findings of EMNLP 2021: "EfficientBERT: Progressively Searching Multilayer Perceptron … ☆32 · Updated last year
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆119 · Updated 3 years ago
- Multi-head attention in PyTorch ☆151 · Updated 6 years ago
- Code for the ACL 2020 paper "Character-Level Translation with Self-Attention" ☆32 · Updated 4 years ago
- ☆22 · Updated 3 years ago