SamLynnEvans / Transformer
Transformer seq2seq model: a program that builds a language translator from a parallel corpus
☆1,398 · Updated 2 years ago
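For context, the repository trains an encoder-decoder Transformer on a parallel corpus to translate between two languages. Below is a minimal sketch of that kind of setup; it is not the repo's own code, and it assumes PyTorch's built-in torch.nn.Transformer (the repo implements its own attention and encoder/decoder modules), omitting positional encodings, padding masks, and the training loop for brevity.

```python
import torch
import torch.nn as nn

class TranslationModel(nn.Module):
    """Hypothetical minimal seq2seq translator; not SamLynnEvans/Transformer's code."""
    def __init__(self, src_vocab, trg_vocab, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.trg_embed = nn.Embedding(trg_vocab, d_model)
        # Built-in encoder-decoder Transformer; positional encodings omitted here.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.generator = nn.Linear(d_model, trg_vocab)

    def forward(self, src, trg):
        # Causal mask so each target position attends only to earlier positions.
        trg_mask = self.transformer.generate_square_subsequent_mask(trg.size(1))
        out = self.transformer(self.src_embed(src), self.trg_embed(trg),
                               tgt_mask=trg_mask)
        return self.generator(out)

# Toy batch of random token ids standing in for a tokenized parallel corpus.
model = TranslationModel(src_vocab=1000, trg_vocab=1000)
src = torch.randint(0, 1000, (2, 7))    # (batch, src_len)
trg = torch.randint(0, 1000, (2, 9))    # (batch, trg_len)
logits = model(src, trg)                # (batch, trg_len, trg_vocab)
print(logits.shape)                     # torch.Size([2, 9, 1000])
```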
Alternatives and similar repositories for Transformer
Users who are interested in Transformer are comparing it to the libraries listed below.
- A Pytorch Implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation" ☆557 · Updated 4 years ago
- ☆3,658 · Updated 2 years ago
- A TensorFlow Implementation of the Transformer: Attention Is All You Need ☆4,379 · Updated 2 years ago
- An annotated implementation of the Transformer paper. ☆6,306 · Updated last year
- A PyTorch implementation of the Transformer model in "Attention is All You Need". ☆9,250 · Updated last year
- Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText. ☆5,571 · Updated last year
- Google AI 2018 BERT pytorch implementation ☆6,419 · Updated last year
- Longformer: The Long-Document Transformer ☆2,138 · Updated 2 years ago
- Reformer, the efficient Transformer, in Pytorch ☆2,171 · Updated 2 years ago
- Open Source Neural Machine Translation and (Large) Language Models in PyTorch ☆6,894 · Updated 3 months ago
- Transformer implementation in PyTorch. ☆491 · Updated 6 years ago
- Minimal Seq2Seq model with Attention for Neural Machine Translation in PyTorch ☆702 · Updated 4 years ago
- An open source framework for seq2seq models in PyTorch. ☆1,509 · Updated last month
- Unsupervised Word Segmentation for Neural Machine Translation and Text Generation ☆2,237 · Updated 10 months ago
- PyTorch original implementation of Cross-lingual Language Model Pretraining. ☆2,912 · Updated 2 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,181 · Updated 3 years ago
- LSTM and QRNN Language Model Toolkit for PyTorch ☆1,974 · Updated 3 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆2,356 · Updated last year
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,577 · Updated 4 years ago
- 🐥 A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI ☆1,509 · Updated 3 years ago
- Models, data loaders and abstractions for language processing, powered by PyTorch ☆3,545 · Updated this week
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,183 · Updated 2 years ago
- Simple XLNet implementation with Pytorch Wrapper ☆581 · Updated 5 years ago
- Multi-Task Deep Neural Networks for Natural Language Understanding ☆2,252 · Updated last year
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" ☆2,215 · Updated 6 years ago
- CNNs for Sentence Classification in PyTorch ☆1,029 · Updated 5 months ago
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,116 · Updated 2 years ago
- Code for paper Fine-tune BERT for Extractive Summarization ☆1,490 · Updated 3 years ago
- Transformer: PyTorch Implementation of "Attention Is All You Need" ☆3,813 · Updated 10 months ago
- Basic Utilities for PyTorch Natural Language Processing (NLP) ☆2,219 · Updated last year