huawei-noah / Pretrained-Language-Model
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
☆3,080 · Updated last year
Alternatives and similar repositories for Pretrained-Language-Model:
Users interested in Pretrained-Language-Model are comparing it to the libraries listed below.
- A PyTorch-based knowledge distillation toolkit for natural language processing ☆1,648 · Updated last year
- Open-source pre-training model framework in PyTorch & pre-trained model zoo ☆3,055 · Updated 11 months ago
- A Lite BERT for Self-Supervised Learning of Language Representations, with large-scale Chinese pre-trained ALBERT models ☆3,967 · Updated 2 years ago
- Source code and dataset for the ACL 2019 paper "ERNIE: Enhanced Language Representation with Informative Entities" ☆1,415 · Updated last year
- Implementation of BERT that can load official pre-trained models for feature extraction and prediction ☆2,421 · Updated 3 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆2,351 · Updated last year
- BERT NLP papers, applications, and GitHub resources, including the newest XLNet (BERT and XLNet related papers and GitHub projects) ☆1,847 · Updated 4 years ago
- XLNet: Generalized Autoregressive Pretraining for Language Understanding ☆6,187 · Updated last year
- Must-read papers on pre-trained language models ☆3,360 · Updated 2 years ago
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations ☆3,269 · Updated 2 years ago
- RoBERTa for Chinese: pre-trained Chinese RoBERTa models ☆2,693 · Updated 8 months ago
- BERT-related papers ☆2,045 · Updated last year
- MASS: Masked Sequence to Sequence Pre-training for Language Generation ☆1,118 · Updated 2 years ago
- Pre-trained Chinese ELECTRA models ☆1,415 · Updated 2 years ago
- Google AI 2018 BERT PyTorch implementation ☆6,372 · Updated last year
- An open-source neural hierarchical multi-label text classification toolkit ☆1,878 · Updated last year
- Language Understanding Evaluation benchmark for Chinese: datasets, baselines, pre-trained models, corpus, and leaderboard ☆1,779 · Updated 2 years ago
- A Lite BERT for Self-Supervised Learning of Language Representations ☆716 · Updated 4 years ago
- ☆3,641 · Updated 2 years ago
- fastNLP: A modularized and extensible NLP framework, currently still in incubation ☆3,126 · Updated last year
- Longformer: The Long-Document Transformer ☆2,104 · Updated 2 years ago
- Code for the paper "Fine-tune BERT for Extractive Summarization" ☆1,485 · Updated 3 years ago
- The source code of FastBERT (ACL 2020) ☆604 · Updated 3 years ago
- Pre-trained Chinese XLNet models ☆1,649 · Updated 2 years ago
- Kashgari is a production-level NLP transfer learning framework built on top of tf.keras for text-labeling and text-classification, includ… ☆2,391 · Updated 7 months ago
- Official implementations for various pre-training models of the ERNIE family, covering topics of language understanding & generation, multimo… ☆6,369 · Updated 7 months ago
- Multi-Task Deep Neural Networks for Natural Language Understanding ☆2,250 · Updated last year
- TensorFlow solution of the NER task using a BiLSTM-CRF model with Google BERT fine-tuning and private server services ☆4,801 · Updated 4 years ago
- Novel deep learning research works with PaddlePaddle ☆1,734 · Updated 8 months ago
- Keras implementation of Transformers for humans ☆5,399 · Updated 5 months ago