fawazsammani / mogrifier-lstm-pytorch
Implementation of Mogrifier LSTM in PyTorch
☆35 · Updated 5 years ago
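For orientation, the Mogrifier LSTM (Melis et al., 2020) repeatedly gates the current input and the previous hidden state against each other before running an otherwise standard LSTM step. Below is a minimal PyTorch sketch of that mutual-gating idea; the class and parameter names (`MogrifierLSTMCell`, `mogrify_rounds`) are illustrative assumptions, not the API of this repository.

```python
import torch
import torch.nn as nn

class MogrifierLSTMCell(nn.Module):
    """Sketch of a Mogrifier LSTM cell: x and h "mogrify" each other
    for a few rounds, then a plain LSTMCell performs the update."""

    def __init__(self, input_size, hidden_size, mogrify_rounds=5):
        super().__init__()
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        self.mogrify_rounds = mogrify_rounds
        # Odd rounds rescale x from h; even rounds rescale h from x.
        self.q = nn.ModuleList(
            [nn.Linear(hidden_size, input_size, bias=False)
             for _ in range((mogrify_rounds + 1) // 2)]
        )
        self.r = nn.ModuleList(
            [nn.Linear(input_size, hidden_size, bias=False)
             for _ in range(mogrify_rounds // 2)]
        )

    def mogrify(self, x, h):
        for i in range(1, self.mogrify_rounds + 1):
            if i % 2 == 1:
                x = 2 * torch.sigmoid(self.q[i // 2](h)) * x
            else:
                h = 2 * torch.sigmoid(self.r[i // 2 - 1](x)) * h
        return x, h

    def forward(self, x, state):
        h, c = state
        x, h = self.mogrify(x, h)
        return self.lstm(x, (h, c))


# Example: one step over a batch of 8, input size 16, hidden size 32.
cell = MogrifierLSTMCell(16, 32, mogrify_rounds=5)
x = torch.randn(8, 16)
h, c = torch.zeros(8, 32), torch.zeros(8, 32)
h, c = cell(x, (h, c))
```

The factor of 2 on the sigmoid keeps the gate's expected scale near 1, so the mogrification perturbs rather than shrinks the activations; the number of rounds is a small hyperparameter (around 5 in the paper).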
Alternatives and similar repositories for mogrifier-lstm-pytorch
Users interested in mogrifier-lstm-pytorch are comparing it to the libraries listed below.
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch ☆76 · Updated 4 years ago
- How Does Selective Mechanism Improve Self-attention Networks? ☆27 · Updated 4 years ago
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- ☆20 · Updated 5 years ago
- Locally Enhanced Self-Attention: Rethinking Self-Attention as Local and Context Terms ☆20 · Updated 3 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 5 years ago
- A PyTorch implementation of the paper "Synthesizer: Rethinking Self-Attention in Transformer Models" ☆73 · Updated 2 years ago
- Sparse Attention with Linear Units ☆17 · Updated 4 years ago
- Code for Explicit Sparse Transformer ☆62 · Updated last year
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 4 years ago
- Reversible Recurrent Neural Network PyTorch implementation ☆21 · Updated 7 years ago
- Code for the ACL 2020 paper "Character-Level Translation with Self-Attention" ☆31 · Updated 4 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers". ☆25 · Updated 4 years ago
- NeurIPS'19: Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting (PyTorch implementation for class imbalance). ☆33 · Updated 5 years ago
- Custom PyTorch implementation of MoCo v3 ☆45 · Updated 4 years ago
- A PyTorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf) ☆23 · Updated 5 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 4 years ago
- A Transformer-based single-model, multi-scale VAE ☆55 · Updated 3 years ago
- Code for "Understanding and Improving Layer Normalization"☆46Updated 5 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in Pytorch☆57Updated 4 years ago
- Code release for NeurIPS 2020 paper "Stochastic Normalization"☆23Updated 3 years ago
- Training Signal Annealing☆13Updated 5 years ago
- [EMNLP'19] Summary for Transformer Understanding☆53Updated 5 years ago
- ☆12Updated last year
- Variational Transformers for Diverse Response Generation☆81Updated 9 months ago
- ☆19Updated 6 years ago
- ☆13Updated 5 years ago
- Code for paper "Continual and Multi-Task Architecture Search (ACL 2019)"☆41Updated 5 years ago
- The project is about predicting sets (of classes) from images.☆22Updated 3 years ago
- Mask Attention Networks: Rethinking and Strengthen Transformer in NAACL2021☆14Updated 3 years ago