namisan / mt-dnn
Multi-Task Deep Neural Networks for Natural Language Understanding
☆2,258 · Updated last year
Alternatives and similar repositories for mt-dnn
Users interested in mt-dnn are comparing it to the repositories listed below.
- PyTorch original implementation of Cross-lingual Language Model Pretraining. ☆2,922 · Updated 2 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆2,368 · Updated last year
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,122 · Updated 3 years ago
- TensorFlow implementation of contextualized word representations from bi-directional language models ☆1,613 · Updated 2 years ago
- BERT-related papers ☆2,046 · Updated 2 years ago
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,177 · Updated 2 years ago
- A curated list of pretrained sentence and word embedding models ☆2,284 · Updated 4 years ago
- jiant is an NLP toolkit ☆1,674 · Updated 2 years ago
- A Python tool for evaluating the quality of sentence embeddings. ☆2,108 · Updated last year
- NCRF++, a Neural Sequence Labeling Toolkit. Easy to use for any sequence labeling task (e.g. NER, POS, Segmentation). It includes character … ☆1,897 · Updated 3 years ago
- Source code and dataset for the ACL 2019 paper "ERNIE: Enhanced Language Representation with Informative Entities" ☆1,418 · Updated last year
- Toolkit for Machine Learning, Natural Language Processing, and Text Generation, in TensorFlow. This is part of the CASL project: http://… ☆2,389 · Updated 4 years ago
- Unsupervised Word Segmentation for Neural Machine Translation and Text Generation ☆2,261 · Updated last year
- Pre-trained ELMo Representations for Many Languages ☆1,463 · Updated 4 years ago
- Code for the paper "Fine-tune BERT for Extractive Summarization" ☆1,506 · Updated 3 years ago
- Super easy library for BERT-based NLP models ☆1,915 · Updated last year
- The Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents context at different levels of granul… ☆1,539 · Updated 2 years ago
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations ☆3,276 · Updated 2 years ago
- Single Headed Attention RNN - "Stop thinking with your head" ☆1,181 · Updated 4 years ago
- 🐥 A PyTorch implementation of OpenAI's finetuned transformer language model with a script to import the weights pre-trained by OpenAI ☆1,520 · Updated 4 years ago
- Must-read Papers on pre-trained language models. ☆3,367 · Updated 3 years ago
- ☆3,682 · Updated 3 years ago
- Code for the ACL 2017 paper "Get To The Point: Summarization with Pointer-Generator Networks" ☆2,195 · Updated 3 years ago
- Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CAS… ☆746 · Updated 3 years ago
- Data augmentation for NLP, presented at EMNLP 2019 ☆1,651 · Updated 2 years ago
- Basic Utilities for PyTorch Natural Language Processing (NLP) ☆2,228 · Updated 2 years ago
- BERT NLP papers, applications, and GitHub resources, including the newest XLNet; BERT and XLNet related papers and GitHub projects ☆1,849 · Updated 4 years ago
- Unsupervised Data Augmentation (UDA) ☆2,204 · Updated 4 years ago
- Code for the EMNLP 2019 paper "Text Summarization with Pretrained Encoders" ☆1,304 · Updated last year
- Implementation of BERT that can load official pre-trained models for feature extraction and prediction ☆2,427 · Updated 3 years ago