keitakurita / Better_LSTM_PyTorch
An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built-in. Fully compatible with PyTorch LSTM.
☆134 · Updated 5 years ago
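The two "best practices" named above are straightforward to sketch. Below is a minimal, dependency-free illustration of the ideas, not the library's actual API: PyTorch's `nn.LSTM` packs its bias vector as four gate slices in the order input, forget, cell, output, so forget-bias initialization sets the second slice to a positive constant, and weight dropout (DropConnect) zeroes individual recurrent weights at training time. All function names here are hypothetical.

```python
import random

def init_lstm_bias(bias, hidden_size, forget_bias=1.0):
    """Set the forget-gate slice of a flat LSTM bias vector to forget_bias.

    Assumes the PyTorch gate ordering [input | forget | cell | output],
    each slice of length hidden_size, so the forget gate occupies
    indices [hidden_size, 2 * hidden_size).
    """
    bias = list(bias)
    bias[hidden_size:2 * hidden_size] = [forget_bias] * hidden_size
    return bias

def drop_weights(weights, p, rng=random):
    """Weight dropout (DropConnect): zero each weight with probability p.

    Surviving weights are scaled by 1 / (1 - p) (inverted dropout) so the
    expected value of each weight is unchanged.
    """
    return [0.0 if rng.random() < p else w / (1.0 - p) for w in weights]
```

In the real library these operations are applied to `nn.LSTM`'s parameter tensors (e.g. `bias_ih_l0`, `weight_hh_l0`) rather than Python lists; this sketch only shows the indexing and masking logic.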
Alternatives and similar repositories for Better_LSTM_PyTorch
Users interested in Better_LSTM_PyTorch are comparing it to the libraries listed below.
- PyTorch implementations of LSTM variants (Dropout + Layer Norm) ☆137 · Updated 4 years ago
- The Annotated Encoder-Decoder with Attention ☆166 · Updated 4 years ago
- LAnguage Modelling Benchmarks ☆138 · Updated 5 years ago
- PyTorch implementation of R-Transformer. Some parts of the code are adapted from the implementations of TCN and Transformer. ☆230 · Updated 6 years ago
- Minimal RNN classifier with self-attention in PyTorch ☆150 · Updated 3 years ago
- PyTorch DataLoader for seq2seq ☆85 · Updated 6 years ago
- A PyTorch implementation of the Transformer model from "Attention Is All You Need" ☆59 · Updated 6 years ago
- Sequence-to-Sequence Models in PyTorch ☆44 · Updated last year
- Text Generation Using a Variational Autoencoder ☆110 · Updated 8 years ago
- (no description) ☆76 · Updated 5 years ago
- Two-Layer Hierarchical Softmax implementation for PyTorch ☆69 · Updated 4 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated 2 years ago
- [ICLR'19] Trellis Networks for Sequence Modeling ☆471 · Updated 6 years ago
- Understanding and visualizing PyTorch batching with LSTM ☆141 · Updated 7 years ago
- Scripts to train a bidirectional LSTM with knowledge distillation from BERT ☆158 · Updated 5 years ago
- Code for "Strong Baselines for Neural Semi-supervised Learning under Domain Shift" (Ruder & Plank, ACL 2018) ☆61 · Updated 2 years ago
- Layer normalization implemented in Keras ☆60 · Updated 3 years ago
- PyTorch implementation of recurrent batch normalization ☆243 · Updated 6 years ago
- (no description) ☆219 · Updated 5 years ago
- Code for the EMNLP 2019 paper "Attention is not not Explanation" ☆58 · Updated 4 years ago
- Efficient Transformers for research in PyTorch and TensorFlow using Locality-Sensitive Hashing ☆95 · Updated 5 years ago
- Implementation of the Universal Transformer in PyTorch ☆261 · Updated 6 years ago
- Code for the EMNLP 2018 paper "Spherical Latent Spaces for Stable Variational Autoencoders" ☆169 · Updated 6 years ago
- Encoding position with the word embeddings ☆83 · Updated 7 years ago
- (no description) ☆152 · Updated 7 years ago
- Variational Attention for Sequence-to-Sequence Models ☆20 · Updated 7 years ago
- (no description) ☆24 · Updated 5 years ago
- Dilated RNNs in PyTorch ☆213 · Updated 6 years ago
- Code examples for CMU CS11-731, Machine Translation and Sequence-to-Sequence Models ☆35 · Updated 5 years ago
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets with SoTA performance ☆85 · Updated 2 years ago