Kyubyong / transformer
A TensorFlow Implementation of the Transformer: Attention Is All You Need
☆4,422 · Updated 2 years ago
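The repositories below all implement variants of the attention mechanism from "Attention Is All You Need". As a minimal orientation, the core operation, scaled dot-product attention, can be sketched in NumPy as follows (the function name and toy shapes are illustrative, not taken from any listed repository):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                 # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key axis
    return weights @ v                              # weighted sum of value vectors

# Toy self-attention: 3 positions, model width 4
x = np.random.rand(3, 4)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```

The full Transformer splits this into multiple heads and adds learned projections, feed-forward layers, and positional encodings, but every implementation listed here is built around this operation.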
Alternatives and similar repositories for transformer
Users interested in transformer are comparing it to the repositories listed below.
- ☆3,673 · Updated 3 years ago
- Google AI 2018 BERT PyTorch implementation ☆6,480 · Updated 2 years ago
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,180 · Updated 2 years ago
- A PyTorch implementation of the Transformer model in "Attention is All You Need" ☆9,450 · Updated last year
- Unsupervised Word Segmentation for Neural Machine Translation and Text Generation ☆2,251 · Updated last year
- An open-source framework for seq2seq models in PyTorch ☆1,514 · Updated last month
- Implementations of various attention mechanisms ☆1,449 · Updated 5 years ago
- Open Source Neural Machine Translation and (Large) Language Models in PyTorch ☆6,954 · Updated last week
- Transformer seq2seq model: a program that builds a language translator from a parallel corpus ☆1,412 · Updated 2 years ago
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research ☆16,605 · Updated 2 years ago
- TensorFlow Neural Machine Translation Tutorial ☆6,441 · Updated 3 years ago
- Keras Attention Layer (Luong and Bahdanau scores) ☆2,814 · Updated last year
- Multi-Task Deep Neural Networks for Natural Language Understanding ☆2,256 · Updated last year
- Implementation of Sequence Generative Adversarial Nets with Policy Gradient ☆2,092 · Updated 6 years ago
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" ☆2,245 · Updated 6 years ago
- PyTorch implementations of various deep NLP models from CS224n (Stanford University) ☆2,950 · Updated 6 years ago
- LSTM and QRNN Language Model Toolkit for PyTorch ☆1,981 · Updated 3 years ago
- Implementation of BERT that can load official pre-trained models for feature extraction and prediction ☆2,427 · Updated 3 years ago
- Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab ☆3,140 · Updated last year
- Bi-directional Attention Flow (BiDAF) network: a multi-stage hierarchical process that represents context at different levels of granul… ☆1,539 · Updated 2 years ago
- A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation" ☆564 · Updated 5 years ago
- An annotated implementation of the Transformer paper ☆6,629 · Updated last year
- Minimal Seq2Seq model with attention for Neural Machine Translation in PyTorch ☆703 · Updated 4 years ago
- Go to https://github.com/pytorch/tutorials; this repo is deprecated and no longer maintained ☆4,551 · Updated 4 years ago
- A general-purpose encoder-decoder framework for TensorFlow ☆5,630 · Updated 5 years ago
- BERT NLP papers, applications, and GitHub resources, including the newest XLNet (BERT- and XLNet-related papers and GitHub projects) ☆1,852 · Updated 4 years ago
- Sequence to Sequence Learning with Keras ☆3,173 · Updated 3 years ago
- Code for the ACL 2017 paper "Get To The Point: Summarization with Pointer-Generator Networks" ☆2,192 · Updated 3 years ago
- A Keras+TensorFlow implementation of the Transformer: Attention Is All You Need ☆713 · Updated 4 years ago
- PyTorch original implementation of Cross-lingual Language Model Pretraining ☆2,922 · Updated 2 years ago