Kyubyong / transformer
A TensorFlow Implementation of the Transformer: Attention Is All You Need
☆4,390 · Updated 2 years ago
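The repository implements the Transformer from "Attention Is All You Need". As a rough orientation, the sketch below shows the scaled dot-product attention at the core of that architecture; it is an illustrative example written against TensorFlow 2, not code from this repository, and the function name and shapes are assumptions for the example only.

```python
# Minimal sketch of scaled dot-product attention, the core operation of the
# Transformer. Illustrative only; the repository's own code may differ.
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    """q, k, v: [batch, heads, seq_len, depth]; mask broadcastable to the logits."""
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    # Similarity of every query against every key, scaled to keep the softmax stable.
    logits = tf.matmul(q, k, transpose_b=True) / tf.math.sqrt(d_k)
    if mask is not None:
        # Positions where mask == 0 get a large negative logit, i.e. ~zero weight.
        logits += (1.0 - mask) * -1e9
    weights = tf.nn.softmax(logits, axis=-1)   # attention distribution over keys
    return tf.matmul(weights, v), weights      # weighted sum of values

# Tiny self-attention usage example with random tensors.
q = tf.random.normal([2, 4, 10, 64])  # batch=2, heads=4, seq_len=10, depth=64
out, attn = scaled_dot_product_attention(q, q, q)
print(out.shape, attn.shape)  # (2, 4, 10, 64) (2, 4, 10, 10)
```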
Alternatives and similar repositories for transformer
Users that are interested in transformer are comparing it to the libraries listed below
- ☆3,669 · Updated 2 years ago
- Google AI 2018 BERT PyTorch implementation ☆6,445 · Updated last year
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,179 · Updated 2 years ago
- A PyTorch implementation of the Transformer model in "Attention is All You Need". ☆9,347 · Updated last year
- Some attention implementations ☆1,446 · Updated 5 years ago
- An open source framework for seq2seq models in PyTorch. ☆1,511 · Updated 3 months ago
- PyTorch implementations of various deep NLP models from cs-224n (Stanford Univ.) ☆2,950 · Updated 5 years ago
- Unsupervised Word Segmentation for Neural Machine Translation and Text Generation ☆2,247 · Updated last year
- Implementation of Sequence Generative Adversarial Nets with Policy Gradient ☆2,094 · Updated 6 years ago
- LSTM and QRNN Language Model Toolkit for PyTorch ☆1,975 · Updated 3 years ago
- Multi-Task Deep Neural Networks for Natural Language Understanding ☆2,254 · Updated last year
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" ☆2,230 · Updated 6 years ago
- Open Source Neural Machine Translation and (Large) Language Models in PyTorch ☆6,931 · Updated 5 months ago
- TensorFlow Neural Machine Translation Tutorial ☆6,435 · Updated 2 years ago
- Transformer seq2seq model; a program that can build a language translator from a parallel corpus ☆1,402 · Updated 2 years ago
- Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents context at different levels of granularity… ☆1,539 · Updated 2 years ago
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ☆16,402 · Updated 2 years ago
- All kinds of text classification models and more with deep learning ☆7,935 · Updated last year
- A general-purpose encoder-decoder framework for TensorFlow ☆5,620 · Updated 4 years ago
- A PyTorch implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation" ☆561 · Updated 4 years ago
- Pretrained language model and related optimization techniques developed by Huawei Noah's Ark Lab. ☆3,130 · Updated last year
- Keras Attention Layer (Luong and Bahdanau scores). ☆2,812 · Updated last year
- Implementation of BERT that can load official pre-trained models for feature extraction and prediction ☆2,427 · Updated 3 years ago
- CNNs for Sentence Classification in PyTorch ☆1,032 · Updated 6 months ago
- An annotated implementation of the Transformer paper. ☆6,453 · Updated last year
- A LITE BERT FOR SELF-SUPERVISED LEARNING OF LANGUAGE REPRESENTATIONS; large-scale pretrained Chinese ALBERT models ☆3,983 · Updated 2 years ago
- 🐥 A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI ☆1,514 · Updated 4 years ago
- Models, data loaders and abstractions for language processing, powered by PyTorch ☆3,556 · Updated last week
- Code for the ACL 2017 paper "Get To The Point: Summarization with Pointer-Generator Networks" ☆2,189 · Updated 3 years ago
- A machine translation reading list maintained by Tsinghua Natural Language Processing Group ☆2,442 · Updated last year