SamLynnEvans / Transformer
Transformer seq2seq model: a program that can build a language translator from a parallel corpus
☆1,401 · Updated 2 years ago
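The seq2seq translation setup described above can be sketched with a minimal PyTorch example. Note this is an illustrative sketch, not the repository's actual code: the vocabulary sizes, layer counts, and the use of `torch.nn.Transformer` are assumptions made here for brevity.

```python
import torch
import torch.nn as nn

# Toy sizes, assumed for illustration only; the repository's own model
# builds its layers from scratch and uses real corpus vocabularies.
SRC_VOCAB, TGT_VOCAB, D_MODEL = 100, 120, 32

class TinyTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, D_MODEL)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            dim_feedforward=64, batch_first=True)
        self.out = nn.Linear(D_MODEL, TGT_VOCAB)

    def forward(self, src, tgt):
        # Causal mask: each target position attends only to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        h = self.transformer(self.src_emb(src), self.tgt_emb(tgt), tgt_mask=mask)
        return self.out(h)  # logits over the target vocabulary

model = TinyTranslator()
src = torch.randint(0, SRC_VOCAB, (2, 7))  # batch of 2 source sentences, length 7
tgt = torch.randint(0, TGT_VOCAB, (2, 5))  # shifted target sentences, length 5
logits = model(src, tgt)
print(logits.shape)  # one logit vector per target position
```

In a real run, such a model would be trained with cross-entropy on paired sentences from the parallel corpus, then decoded greedily or with beam search at translation time.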
Alternatives and similar repositories for Transformer
Users interested in Transformer are comparing it to the libraries listed below.
- A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation" ☆560 · Updated 4 years ago
- Transformer implementation in PyTorch. ☆493 · Updated 6 years ago
- ☆3,665 · Updated 2 years ago
- Reformer, the efficient Transformer, in PyTorch ☆2,178 · Updated 2 years ago
- Minimal seq2seq model with attention for neural machine translation in PyTorch ☆703 · Updated 4 years ago
- Longformer: The Long-Document Transformer ☆2,153 · Updated 2 years ago
- A PyTorch implementation of the Transformer model in "Attention is All You Need" ☆9,321 · Updated last year
- Google AI 2018 BERT PyTorch implementation ☆6,436 · Updated last year
- An open-source framework for seq2seq models in PyTorch ☆1,511 · Updated 3 months ago
- Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText ☆5,591 · Updated last year
- TextGAN, a PyTorch framework for GAN-based text generation models ☆914 · Updated last year
- Unsupervised word segmentation for neural machine translation and text generation ☆2,246 · Updated last year
- PyTorch implementation of Google BERT ☆593 · Updated 5 years ago
- Implementation of the Transformer model in TensorFlow ☆473 · Updated 2 years ago
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,119 · Updated 2 years ago
- Simple text generator with an OpenAI GPT-2 PyTorch implementation ☆1,005 · Updated 6 years ago
- An annotated implementation of the Transformer paper ☆6,421 · Updated last year
- A TensorFlow implementation of the Transformer: "Attention Is All You Need" ☆4,389 · Updated 2 years ago
- 🐥 A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI ☆1,514 · Updated 4 years ago
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" ☆2,226 · Updated 6 years ago
- Simple XLNet implementation with a PyTorch wrapper ☆580 · Updated 6 years ago
- PyTorch implementation of "Get To The Point: Summarization with Pointer-Generator Networks" ☆913 · Updated 2 years ago
- Transformer: PyTorch implementation of "Attention Is All You Need" ☆3,935 · Updated 3 weeks ago
- Code for the paper "Fine-tune BERT for Extractive Summarization" ☆1,489 · Updated 3 years ago
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers" ☆1,584 · Updated 4 years ago
- Simple transformer implementation from scratch in PyTorch (archival; latest version on Codeberg) ☆1,087 · Updated 4 months ago
- Single Headed Attention RNN: "Stop thinking with your head" ☆1,182 · Updated 3 years ago
- A simplified PyTorch implementation of "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient" (Yu, Lantao, et al.) ☆647 · Updated 6 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆2,358 · Updated last year
- PyTorch re-implementation of "Generating Sentences from a Continuous Space" by Bowman et al., 2015 (https://arxiv.org/abs/1511.06349) ☆592 · Updated 2 months ago