DSE-MSU / R-transformer
PyTorch implementation of R-Transformer. Some parts of the code are adapted from the implementations of TCN and the Transformer.
☆226 · Updated 5 years ago
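To make the repository's core idea concrete: R-Transformer replaces positional encodings with a short local RNN that summarizes the window of positions ending at each step, and then applies multi-head self-attention over those summaries. The sketch below is an illustrative reconstruction under that reading, not the repository's actual code; the class names, the GRU choice, and the `window_size` default are assumptions.

```python
# Minimal sketch of an R-Transformer-style block: LocalRNN -> self-attention
# -> feed-forward, each with a residual connection. Written for clarity,
# not efficiency (the window gather materializes all windows at once).
import torch
import torch.nn as nn


class LocalRNNLayer(nn.Module):
    """Run a shared RNN over the short window of positions ending at each step."""

    def __init__(self, d_model: int, window_size: int = 5):
        super().__init__()
        self.window_size = window_size
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        batch, seq_len, d_model = x.shape
        # Left-pad so every position has a full window of history.
        pad = x.new_zeros(batch, self.window_size - 1, d_model)
        padded = torch.cat([pad, x], dim=1)
        # Sliding windows: unfold yields (batch, seq_len, d_model, window).
        windows = padded.unfold(1, self.window_size, 1)
        windows = windows.permute(0, 1, 3, 2)  # (batch, seq_len, window, d_model)
        windows = windows.reshape(batch * seq_len, self.window_size, d_model)
        # The last hidden state summarizes each window.
        _, h_n = self.rnn(windows)  # h_n: (1, batch*seq_len, d_model)
        return h_n.squeeze(0).view(batch, seq_len, d_model)


class RTransformerBlock(nn.Module):
    """LocalRNN, then multi-head self-attention, then feed-forward."""

    def __init__(self, d_model: int = 128, n_heads: int = 4, window_size: int = 5):
        super().__init__()
        self.local_rnn = LocalRNNLayer(d_model, window_size)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.norm1(x + self.local_rnn(x))
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm2(x + attn_out)
        return self.norm3(x + self.ff(x))
```

In the paper's framing, order information comes from the RNN inside each window rather than from added position embeddings, which is why no positional encoding appears in the block.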
Alternatives and similar repositories for R-transformer:
Users interested in R-transformer are comparing it to the libraries listed below.
- Minimal RNN classifier with self-attention in PyTorch ☆150 · Updated 3 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in; fully compatible with PyTorch's LSTM (see the weight-dropout sketch after this list). ☆132 · Updated 5 years ago
- LAnguage Modelling Benchmarks ☆137 · Updated 4 years ago
- Implementation of the Universal Transformer in PyTorch ☆259 · Updated 6 years ago
- ☆213 · Updated 4 years ago
- Generative Flow based Sequence-to-Sequence Toolkit written in Python. ☆244 · Updated 4 years ago
- Transformer with Untied Positional Encoding (TUPE). Code of paper "Rethinking Positional Encoding in Language Pre-training". Improve exis… ☆250 · Updated 3 years ago
- [ICLR'19] Trellis Networks for Sequence Modeling ☆472 · Updated 5 years ago
- Efficient Transformers for research, PyTorch and TensorFlow, using Locality Sensitive Hashing ☆93 · Updated 4 years ago
- This repository contains various types of attention mechanisms like Bahdanau, Soft Attention, Additive Attention, Hierarchical Attention… ☆125 · Updated 3 years ago
- PyTorch implementation of batched bi-RNN encoder and attention-decoder. ☆280 · Updated 6 years ago
- ☆83 · Updated 5 years ago
- A wrapper layer for stacking layers horizontally ☆228 · Updated 2 years ago
- My take on a practical implementation of Linformer for PyTorch. ☆409 · Updated 2 years ago
- PyTorch implementations of LSTM variants (Dropout + Layer Norm) ☆136 · Updated 3 years ago
- Implements Reformer: The Efficient Transformer in PyTorch. ☆84 · Updated 4 years ago
- Fully featured implementation of Routing Transformer ☆288 · Updated 3 years ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆151 · Updated last year
- Code for the paper "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks" ☆577 · Updated 5 years ago
- PyTorch implementation of beam search decoding for seq2seq models (see the beam-search sketch after this list) ☆338 · Updated last year
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆602 · Updated 6 months ago
- PyTorch DataLoader for seq2seq ☆84 · Updated 5 years ago
- PyTorch implementation of "Attention Is All You Need" ☆240 · Updated 3 years ago
- Minimal Seq2Seq model with Attention for Neural Machine Translation in PyTorch ☆693 · Updated 4 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on the main NMT datasets, with SoTA performance. ☆86 · Updated last year
- PyTorch implementation of ALBERT (A Lite BERT for Self-supervised Learning of Language Representations) ☆226 · Updated 3 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆256 · Updated 3 years ago
- Multi-head attention in PyTorch ☆149 · Updated 5 years ago
- The Annotated Encoder-Decoder with Attention ☆166 · Updated 3 years ago
- A PyTorch implementation of the paper "Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks" ☆81 · Updated 6 years ago
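As referenced in the weight-dropped LSTM entry above, here is a minimal sketch of weight dropout (DropConnect applied to the recurrent hidden-to-hidden weights). It is an illustrative single-layer re-implementation with assumed names, not that repository's code.

```python
# Single-layer LSTM with DropConnect on the recurrent (hh) weights and a
# forget-gate bias of 1, two of the "best practices" named in the entry.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightDropLSTM(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, weight_p: float = 0.5):
        super().__init__()
        self.hidden_size = hidden_size
        self.weight_p = weight_p
        self.weight_ih = nn.Parameter(torch.empty(4 * hidden_size, input_size))
        self.weight_hh = nn.Parameter(torch.empty(4 * hidden_size, hidden_size))
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))
        nn.init.xavier_uniform_(self.weight_ih)
        nn.init.orthogonal_(self.weight_hh)
        with torch.no_grad():  # forget-gate bias = 1 (the "forget bias" practice)
            self.bias[hidden_size:2 * hidden_size].fill_(1.0)

    def forward(self, x: torch.Tensor, state=None):
        # x: (batch, seq_len, input_size)
        batch, seq_len, _ = x.shape
        if state is None:
            h = x.new_zeros(batch, self.hidden_size)
            c = x.new_zeros(batch, self.hidden_size)
        else:
            h, c = state
        # DropConnect: sample ONE mask on the recurrent weights per forward
        # pass and reuse it at every time step (a no-op in eval mode).
        w_hh = F.dropout(self.weight_hh, p=self.weight_p, training=self.training)
        outputs = []
        for t in range(seq_len):
            gates = x[:, t] @ self.weight_ih.t() + h @ w_hh.t() + self.bias
            i, f, g, o = gates.chunk(4, dim=1)  # input, forget, cell, output
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            outputs.append(h)
        return torch.stack(outputs, dim=1), (h, c)
```

The mask is sampled once per forward pass and shared across time steps, following the usual DropConnect recipe for recurrent weights.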
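And as referenced in the beam-search entry, a minimal sketch of beam search decoding over an autoregressive step function. The `step_fn` interface, the `bos_id`/`eos_id` arguments, and the length normalization are illustrative assumptions, not that repository's API.

```python
# Beam search over any autoregressive model exposed as a step function.
import torch


def beam_search(step_fn, bos_id: int, eos_id: int,
                beam_width: int = 4, max_len: int = 50):
    """step_fn(tokens) -> 1-D tensor of next-token log-probs (vocab_size,)."""
    beams = [(0.0, [bos_id])]   # (cumulative log-prob, token list)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, tokens in beams:
            if tokens[-1] == eos_id:          # hypothesis already complete
                finished.append((score, tokens))
                continue
            log_probs = step_fn(tokens)
            top_lp, top_ids = log_probs.topk(beam_width)
            for lp, tok in zip(top_lp.tolist(), top_ids.tolist()):
                candidates.append((score + lp, tokens + [tok]))
        if not candidates:                    # every hypothesis has ended
            beams = []
            break
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]       # prune to the best hypotheses
    finished.extend(beams)
    # Length-normalize so longer hypotheses are not unfairly penalized.
    return max(finished, key=lambda c: c[0] / len(c[1]))
```

Greedy decoding is the `beam_width=1` special case; widening the beam trades compute for a better search over the model's output distribution.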