Kyubyong / transformer
A TensorFlow Implementation of the Transformer: Attention Is All You Need
☆4,431 · Updated 2 years ago
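The repository is a TensorFlow implementation of the Transformer from "Attention Is All You Need", whose core building block is scaled dot-product attention. Below is a minimal sketch of that operation written against the TensorFlow 2 API; it is illustrative only, not code taken from this repository, and the function name, tensor layout, and masking convention are assumptions.

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    # q, k, v: [batch, heads, seq_len, d_k] (hypothetical layout)
    scores = tf.matmul(q, k, transpose_b=True)        # [..., seq_q, seq_k]
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = scores / tf.math.sqrt(d_k)               # scale to keep the softmax well-behaved
    if mask is not None:
        scores += mask * -1e9                         # drive masked positions toward zero weight
    weights = tf.nn.softmax(scores, axis=-1)          # attention distribution over keys
    return tf.matmul(weights, v), weights

# Toy usage: batch=1, heads=2, seq_len=4, d_k=8 (illustrative sizes).
q = tf.random.normal([1, 2, 4, 8])
k = tf.random.normal([1, 2, 4, 8])
v = tf.random.normal([1, 2, 4, 8])
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # (1, 2, 4, 8) (1, 2, 4, 4)
```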
Alternatives and similar repositories for transformer
Users interested in transformer are comparing it to the libraries listed below:
- Google AI 2018 BERT pytorch implementation ☆6,494 · Updated 2 years ago
- ☆3,677 · Updated 3 years ago
- A PyTorch implementation of the Transformer model in "Attention is All You Need". ☆9,483 · Updated last year
- Some attention implementations ☆1,450 · Updated 5 years ago
- Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. ☆16,676 · Updated 2 years ago
- Unsupervised Word Segmentation for Neural Machine Translation and Text Generation ☆2,255 · Updated last year
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,177 · Updated 2 years ago
- Keras Attention Layer (Luong and Bahdanau scores); both score functions are sketched below, after this list. ☆2,813 · Updated last year
- An open source framework for seq2seq models in PyTorch. ☆1,516 · Updated last month
- TensorFlow Neural Machine Translation Tutorial ☆6,448 · Updated 3 years ago
- Implementation of Sequence Generative Adversarial Nets with Policy Gradient ☆2,094 · Updated 6 years ago
- Open Source Neural Machine Translation and (Large) Language Models in PyTorch ☆6,966 · Updated 3 weeks ago
- A Pytorch Implementation of "Attention is All You Need" and "Weighted Transformer Network for Machine Translation" ☆565 · Updated 5 years ago
- Implementation of BERT that could load official pre-trained models for feature extraction and prediction ☆2,426 · Updated 3 years ago
- An annotated implementation of the Transformer paper. ☆6,691 · Updated last year
- Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab. ☆3,143 · Updated last year
- A machine translation reading list maintained by Tsinghua Natural Language Processing Group ☆2,438 · Updated last year
- Transformer seq2seq model: a program that can build a language translator from a parallel corpus ☆1,416 · Updated 2 years ago
- Pytorch implementations of various Deep NLP models in cs-224n (Stanford Univ) ☆2,949 · Updated 6 years ago
- Code and model for the paper "Improving Language Understanding by Generative Pre-Training" ☆2,254 · Updated 6 years ago
- A Lite BERT for Self-Supervised Learning of Language Representations; large-scale pre-trained Chinese ALBERT models ☆3,988 · Updated 2 years ago
- BERT NLP papers, applications, and GitHub resources, including the newest XLNet (papers and GitHub projects related to BERT and XLNet) ☆1,853 · Updated 4 years ago
- A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need ☆713 · Updated 4 years ago
- PyTorch original implementation of Cross-lingual Language Model Pretraining. ☆2,922 · Updated 2 years ago
- All kinds of text classification models and more with deep learning ☆7,938 · Updated 2 years ago
- Models, data loaders and abstractions for language processing, powered by PyTorch ☆3,559 · Updated 2 months ago
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations ☆3,272 · Updated 2 years ago
- Multi-Task Deep Neural Networks for Natural Language Understanding ☆2,257 · Updated last year
- Toolkit for Machine Learning, Natural Language Processing, and Text Generation, in TensorFlow. This is part of the CASL project: http://… ☆2,390 · Updated 4 years ago
- Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents context at different levels of granul… ☆1,539 · Updated 2 years ago
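One entry above is a Keras layer wrapping the Luong and Bahdanau attention scores. As a reference for what those scoring functions compute, here is a minimal TensorFlow sketch; the shapes, dimensions, and variable names are illustrative assumptions and do not reflect that library's API.

```python
import tensorflow as tf

# Hypothetical sizes: decoder state s [batch, d], encoder states H [batch, T, d].
batch, T, d, attn_units = 2, 5, 16, 32
s = tf.random.normal([batch, d])
H = tf.random.normal([batch, T, d])

# Luong "dot" score: e_i = s . h_i
luong_dot = tf.einsum('bd,btd->bt', s, H)                          # [batch, T]

# Luong "general" score: e_i = s^T W h_i, with a learned matrix W
W = tf.Variable(tf.random.normal([d, d]))
luong_general = tf.einsum('bd,de,bte->bt', s, W, H)                # [batch, T]

# Bahdanau (additive) score: e_i = v^T tanh(W1 s + W2 h_i)
W1 = tf.Variable(tf.random.normal([d, attn_units]))
W2 = tf.Variable(tf.random.normal([d, attn_units]))
v = tf.Variable(tf.random.normal([attn_units]))
hidden = tf.tanh(tf.expand_dims(tf.matmul(s, W1), 1)               # [batch, 1, units]
                 + tf.einsum('btd,du->btu', H, W2))                # [batch, T, units]
bahdanau = tf.einsum('u,btu->bt', v, hidden)                       # [batch, T]

# Whichever score is used, it is normalised into weights and applied to the encoder states.
weights = tf.nn.softmax(bahdanau, axis=-1)                         # [batch, T]
context = tf.einsum('bt,btd->bd', weights, H)                      # attention context vector
```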