Kyubyong / transformer
A TensorFlow Implementation of the Transformer: Attention Is All You Need
☆4,454 · Updated 2 years ago
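The repository above (and most of those listed below) implements the Transformer from "Attention Is All You Need", whose core operation is scaled dot-product attention: softmax(QKᵀ/√d_k)·V. A minimal NumPy sketch of that operation follows; the function name and array shapes are illustrative, not taken from any of the listed repos:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, as in "Attention Is All You Need":
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    Shapes (illustrative): Q (n_q, d_k), K (n_k, d_k), V (n_k, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)   # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the key axis
    return weights @ V                             # (n_q, d_v) weighted sum of values
```

Real implementations add masking, multiple heads, and learned projections on top of this kernel, but each repo below ultimately builds on this same computation.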
Alternatives and similar repositories for transformer
Users interested in transformer are comparing it to the libraries listed below.
- A PyTorch implementation of the Transformer model in "Attention is All You Need". ☆9,555 · Updated last year
- Google AI 2018 BERT PyTorch implementation. ☆6,508 · Updated 2 years ago
- ☆3,682 · Updated 3 years ago
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ☆16,818 · Updated 2 years ago
- Various attention implementations. ☆1,450 · Updated 6 years ago
- An open-source framework for seq2seq models in PyTorch. ☆1,516 · Updated 3 months ago
- XLNet: Generalized Autoregressive Pretraining for Language Understanding. ☆6,179 · Updated 2 years ago
- Open-source neural machine translation and (large) language models in PyTorch. ☆6,980 · Updated 2 months ago
- PyTorch implementations of various deep NLP models from CS224n (Stanford University). ☆2,948 · Updated 6 years ago
- Implementation of Sequence Generative Adversarial Nets with policy gradient. ☆2,094 · Updated 6 years ago
- Unsupervised word segmentation for neural machine translation and text generation. ☆2,261 · Updated last year
- LSTM and QRNN language model toolkit for PyTorch. ☆1,984 · Updated 3 years ago
- A machine translation reading list maintained by the Tsinghua Natural Language Processing Group. ☆2,439 · Updated last year
- A general-purpose encoder-decoder framework for TensorFlow. ☆5,636 · Updated 5 years ago
- TensorFlow neural machine translation tutorial. ☆6,458 · Updated 3 years ago
- An annotated implementation of the Transformer paper. ☆6,844 · Updated last year
- Transformer seq2seq model: a program that builds a language translator from a parallel corpus. ☆1,420 · Updated 2 years ago
- CNNs for sentence classification in PyTorch. ☆1,035 · Updated 11 months ago
- Implementation of BERT that can load official pre-trained models for feature extraction and prediction. ☆2,427 · Updated 3 years ago
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training". ☆2,263 · Updated 6 years ago
- The Bi-directional Attention Flow (BiDAF) network, a multi-stage hierarchical process that represents context at different levels of granularity. ☆1,539 · Updated 2 years ago
- A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation". ☆574 · Updated 5 years ago
- Go to https://github.com/pytorch/tutorials; this repo is deprecated and no longer maintained. ☆4,548 · Updated 4 years ago
- Multi-Task Deep Neural Networks for Natural Language Understanding. ☆2,258 · Updated last year
- Keras attention layer (Luong and Bahdanau scores). ☆2,815 · Updated 2 years ago
- BERT NLP papers, applications, and GitHub resources, including the newest XLNet (papers and GitHub projects related to BERT and XLNet). ☆1,848 · Updated 4 years ago
- 🐥 A PyTorch implementation of OpenAI's fine-tuned transformer language model, with a script to import the weights pre-trained by OpenAI. ☆1,521 · Updated 4 years ago
- Minimal seq2seq model with attention for neural machine translation in PyTorch. ☆702 · Updated 5 years ago
- Neural machine translation and sequence learning using TensorFlow. ☆1,488 · Updated 2 years ago
- Sequence-to-sequence learning with Keras. ☆3,177 · Updated 3 years ago