phohenecker / pytorch-transformer
A PyTorch implementation of the Transformer model from "Attention Is All You Need".
☆59 · Updated 6 years ago
Alternatives and similar repositories for pytorch-transformer
Users interested in pytorch-transformer are comparing it to the libraries listed below.
- The Annotated Encoder-Decoder with Attention ☆166 · Updated 4 years ago
- ☆121 · Updated 5 years ago
- LAnguage Modelling Benchmarks ☆138 · Updated 5 years ago
- Adaptive Softmax implementation for PyTorch ☆81 · Updated 6 years ago
- Code for the Eager Translation Model from the paper "You May Not Need Attention" ☆295 · Updated 6 years ago
- PyTorch DataLoader for seq2seq ☆85 · Updated 6 years ago
- Highway network implemented in PyTorch ☆80 · Updated 8 years ago
- A PyTorch implementation of the Reformer Network (https://openreview.net/pdf?id=rkgNKkHtvB) ☆53 · Updated 2 years ago
- An LSTM in PyTorch with best practices (weight dropout, forget bias, etc.) built in. Fully compatible with PyTorch LSTM. ☆134 · Updated 5 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated 2 years ago
- LaNMT: Latent-variable Non-autoregressive Neural Machine Translation with Deterministic Inference ☆80 · Updated 4 years ago
- ☆47 · Updated 6 years ago
- PyTorch implementations of LSTM variants (dropout + layer norm) ☆137 · Updated 4 years ago
- Training Transformer-XL on 128 GPUs ☆140 · Updated 5 years ago
- Comparing fixed and adaptive computation time for recurrent neural networks ☆35 · Updated 7 years ago
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- Reproducing "Character-Level Language Modeling with Deeper Self-Attention" in PyTorch ☆61 · Updated 6 years ago
- ☆219 · Updated 5 years ago
- A PyTorch implementation of "Language Modeling with Gated Convolutional Networks" ☆100 · Updated 3 years ago
- Code for the EMNLP 2018 paper "Spherical Latent Spaces for Stable Variational Autoencoders" ☆169 · Updated 6 years ago
- ☆64 · Updated 5 years ago
- ☆197 · Updated 2 years ago
- Understanding and visualizing PyTorch batching with LSTM ☆141 · Updated 7 years ago
- Latent Alignment and Variational Attention ☆327 · Updated 6 years ago
- Generative flow based sequence-to-sequence toolkit written in Python ☆245 · Updated 5 years ago
- Embedding quantization (compressing word embeddings) ☆86 · Updated 6 years ago
- Checking the interpretability of attention on text classification models ☆49 · Updated 6 years ago
- ☆152 · Updated 7 years ago
- Sparse and structured neural attention mechanisms ☆224 · Updated 4 years ago
- Code inspired by "Unsupervised Machine Translation Using Monolingual Corpora Only" ☆50 · Updated last year