fawazsammani / mogrifier-lstm-pytorch
Implementation of Mogrifier LSTM in PyTorch
☆35 · Updated 5 years ago
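For context, the Mogrifier LSTM (Melis et al., ICLR 2020) lets the input x and the previous hidden state h mutually gate each other for a few alternating rounds before the ordinary LSTM update. The snippet below is a minimal PyTorch sketch of that idea based on the paper, not the code from this repository; the `rounds` hyperparameter and module names are illustrative.

```python
import torch
import torch.nn as nn

class MogrifierLSTMCell(nn.Module):
    """Minimal Mogrifier LSTM cell sketch (not this repo's implementation)."""

    def __init__(self, input_size, hidden_size, rounds=5):
        super().__init__()
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        self.rounds = rounds
        # Odd rounds rescale x from h; even rounds rescale h from x.
        self.q = nn.ModuleList(
            nn.Linear(hidden_size, input_size) for _ in range((rounds + 1) // 2)
        )
        self.r = nn.ModuleList(
            nn.Linear(input_size, hidden_size) for _ in range(rounds // 2)
        )

    def mogrify(self, x, h):
        # Per the paper: x <- 2*sigmoid(Q_i h) * x on odd rounds,
        #                h <- 2*sigmoid(R_i x) * h on even rounds.
        for i in range(1, self.rounds + 1):
            if i % 2 == 1:
                x = 2 * torch.sigmoid(self.q[i // 2](h)) * x
            else:
                h = 2 * torch.sigmoid(self.r[i // 2 - 1](x)) * h
        return x, h

    def forward(self, x, state):
        h, c = state
        x, h = self.mogrify(x, h)    # mutual gating first
        return self.lstm(x, (h, c))  # then the standard LSTM step

# Usage: one step on a batch of 8
cell = MogrifierLSTMCell(input_size=32, hidden_size=64)
x = torch.randn(8, 32)
state = (torch.zeros(8, 64), torch.zeros(8, 64))
h1, c1 = cell(x, state)
```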
Alternatives and similar repositories for mogrifier-lstm-pytorch:
Users interested in mogrifier-lstm-pytorch are comparing it to the libraries listed below.
- A quick walk-through of the innards of LSTMs and a naive implementation of the Mogrifier LSTM paper in PyTorch ☆76 · Updated 4 years ago
- Code for Explicit Sparse Transformer ☆60 · Updated last year
- Learning to Encode Position for Transformer with Continuous Dynamical Model ☆59 · Updated 4 years ago
- For the paper "Gaussian Transformer: A Lightweight Approach for Natural Language Inference" ☆28 · Updated 5 years ago
- Official PyTorch implementation of Time-aware Large Kernel (TaLK) Convolutions (ICML 2020) ☆29 · Updated 4 years ago
- How Does Selective Mechanism Improve Self-attention Networks? ☆27 · Updated 4 years ago
- ☆20 · Updated 5 years ago
- A PyTorch implementation of Adafactor (https://arxiv.org/pdf/1804.04235.pdf) ☆23 · Updated 5 years ago
- Implementing SYNTHESIZER: Rethinking Self-Attention in Transformer Models using PyTorch ☆70 · Updated 4 years ago
- PyTorch implementation of Performer from the paper "Rethinking Attention with Performers" ☆25 · Updated 4 years ago
- Code for the ACL 2020 paper Character-Level Translation with Self-Attention ☆32 · Updated 4 years ago
- ☆13 · Updated 5 years ago
- ☆39 · Updated 4 years ago
- NeurIPS'19: Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting (PyTorch implementation for class imbalance) ☆33 · Updated 5 years ago
- Code for the paper 'Minimizing FLOPs to Learn Efficient Sparse Representations' published at ICLR 2020 ☆20 · Updated 5 years ago
- Code for "Understanding and Improving Layer Normalization" ☆46 · Updated 5 years ago
- PyTorch implementation of Reversible Recurrent Neural Networks ☆21 · Updated 7 years ago
- Locally Enhanced Self-Attention: Rethinking Self-Attention as Local and Context Terms ☆20 · Updated 3 years ago
- Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention" ☆44 · Updated 3 years ago
- Sparse Attention with Linear Units ☆17 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- Mask Attention Networks: Rethinking and Strengthen Transformer (NAACL 2021) ☆14 · Updated 3 years ago
- Source code for Multi-modal Circulant Fusion (MCF) for Temporal Activity Localization ☆23 · Updated 6 years ago
- ☆23 · Updated 4 years ago
- Reproducing the Linear Multihead Attention introduced in the Linformer paper (Linformer: Self-Attention with Linear Complexity) ☆76 · Updated 4 years ago
- Chainer implementation of TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning ☆56 · Updated 5 years ago
- ☆13 · Updated 5 years ago
- Official PyTorch implementation for the paper 'SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients' ☆17 · Updated 3 years ago
- Implementation of OmniNet, Omnidirectional Representations from Transformers, in PyTorch ☆57 · Updated 4 years ago
- A single-model, multi-scale VAE based on the Transformer ☆55 · Updated 3 years ago