threelittlemonkeys / rnn-encoder-decoder-pytorch
RNN Encoder-Decoder in PyTorch
☆41 · Updated 3 months ago
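As a quick orientation, here is a minimal sketch of the pattern this repository implements: an RNN encoder-decoder (Cho et al., 2014) trained with teacher forcing. The GRU cells, layer sizes, and toy batch below are illustrative assumptions, not code taken from the repository.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, src):                     # src: (batch, src_len)
        _, hidden = self.rnn(self.embed(src))
        return hidden                           # (1, batch, hidden_dim)

class Decoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden):             # tgt: (batch, tgt_len)
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden         # logits over the vocabulary

# Teacher-forced training step: the decoder is conditioned on the final
# encoder hidden state and predicts each target token from its predecessor.
src = torch.randint(0, 1000, (8, 12))           # toy batch of source token ids
tgt = torch.randint(0, 1000, (8, 10))           # toy batch of target token ids
encoder, decoder = Encoder(1000), Decoder(1000)
logits, _ = decoder(tgt[:, :-1], encoder(src))
loss = nn.functional.cross_entropy(logits.reshape(-1, 1000),
                                   tgt[:, 1:].reshape(-1))
```

In this hypothetical setup the final encoder hidden state serves as the fixed-length context vector; attention-based variants instead let the decoder attend over all encoder states.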
Related projects
Alternatives and complementary repositories for rnn-encoder-decoder-pytorch
- Code for "Finetuning Pretrained Transformers into Variational Autoencoders" ☆37 · Updated 2 years ago
- Transformer-based Conditional Variational Autoencoder for Controllable Story Generation ☆146 · Updated 2 years ago
- Sequence to Sequence Models in PyTorch ☆44 · Updated 3 months ago
- PyTorch implementation of Compressive Transformers, from DeepMind ☆155 · Updated 3 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built-in. Fully compatible with PyTorch LSTM. ☆133 · Updated 4 years ago
- Implementation of Memformer, a memory-augmented Transformer, in PyTorch ☆106 · Updated 4 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆66 · Updated last year
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆46 · Updated 4 years ago
- Using PyTorch's nn.Transformer module to create an English-to-French neural machine translation model ☆77 · Updated 4 years ago
- Cascaded Text Generation with Markov Transformers ☆128 · Updated last year
- Minimal RNN classifier with self-attention in PyTorch ☆151 · Updated 2 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated last year
- Official code for the paper "Towards Transparent and Explainable Attention Models" (ACL 2020) ☆35 · Updated 2 years ago
- LAnguage Modelling Benchmarks ☆137 · Updated 4 years ago
- Transformer-Based Conditioned Variational Autoencoder for Story Completion ☆94 · Updated 4 years ago
- Implementation of "Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs" ☆77 · Updated 3 years ago
- Variational Transformers for Diverse Response Generation ☆82 · Updated 3 months ago
- Implementation of Feedback Transformer in PyTorch ☆104 · Updated 3 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆155 · Updated 9 months ago
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆100 · Updated 3 years ago
- Hard-Coded Gaussian Attention for Neural Machine Translation ☆36 · Updated last year
- A simple module that consistently outperforms self-attention and the Transformer model on major NMT datasets, with SoTA performance ☆87 · Updated last year
- Repository describing example random control tasks for designing and interpreting neural probes ☆31 · Updated 2 years ago
- Implements "Reformer: The Efficient Transformer" in PyTorch ☆84 · Updated 4 years ago
- Trains Transformer model variants. Data isn't shuffled between batches. ☆139 · Updated 2 years ago
- Multi-head attention in PyTorch (see the sketch after this list) ☆148 · Updated 5 years ago
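The final entry above covers multi-head attention. As a hedged illustration of the operation itself, the sketch below uses PyTorch's built-in nn.MultiheadAttention rather than that repository's own code, with illustrative dimensions.

```python
import torch
import torch.nn as nn

# Self-attention with 8 heads over a 512-dimensional embedding; the sizes
# here are illustrative, not taken from any of the repositories listed above.
mha = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)

x = torch.randn(2, 16, 512)        # (batch, seq_len, embed_dim)
out, attn = mha(x, x, x)           # query = key = value -> self-attention
print(out.shape, attn.shape)       # torch.Size([2, 16, 512]) torch.Size([2, 16, 16])
```

Each head attends over the sequence in its own learned subspace, and the per-head outputs are concatenated and projected back to the embedding dimension.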