cloudyskyy / Guwen-UNILM
This repository implements a Classical-to-Modern Chinese translation model built on bert4keras. It uses UNILM (Dong et al., 2019), a pre-trained model based on masked self-attention, as the backbone of the translation system. We first trained UNILM initialized from ordinary Chinese (modern-text) BERT and RoBERTa weights (referred to as B-UNILM and R-UNILM in the paper, respectively). To better adapt UNILM to the characteristics of Classical Chinese, we also tried initializing it from Guwen-BERT, a model pre-trained on Classical Chinese, which achieved the best results.
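The masked self-attention mentioned above is what lets a single UNILM encoder act as a seq2seq model: source (Classical Chinese) tokens attend to each other bidirectionally, while target (Modern Chinese) tokens attend only to the source and to earlier target tokens. A minimal pure-Python sketch of that attention mask (an illustration of the mechanism from the UniLM paper, not code from this repository):

```python
def unilm_seq2seq_mask(segment_ids):
    """Build the UNILM seq2seq attention mask.

    segment_ids: 0 for source positions, 1 for target positions.
    Returns mask where mask[i][j] == 1 iff position i may attend to j.
    """
    n = len(segment_ids)
    mask = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Source tokens are visible to everyone (bidirectional);
            # target tokens are visible only causally (j <= i).
            if segment_ids[j] == 0 or j <= i:
                mask[i][j] = 1
    return mask

# Example: 3 source tokens followed by 2 target tokens.
m = unilm_seq2seq_mask([0, 0, 0, 1, 1])
# Source rows see only the source; each target row additionally
# sees itself and earlier target tokens.
```

In bert4keras this mask is applied automatically when the model is built with the UNILM-style configuration, so training reduces to ordinary language-model loss over the target segment.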
Alternatives and similar repositories for Guwen-UNILM:
- A Benchmark for Classical Chinese Based on a Crowdsourcing System
- SikuBERT: a pre-trained language model for the Siku Quanshu (Complete Library of the Four Treasuries)
- Classical Chinese Language Understanding Evaluation Benchmark: datasets, baselines, pre-trained models, corpus, and leaderboard
- Classical Chinese named entity recognition based on BiLSTM+CRF, covering entities such as persons, locations, organizations, and times
- GuwenModels: a collection of Classical Chinese natural language processing models and resources gathered from around the internet
- A corpus for Classical Chinese event extraction
- Summarization and coreference resolution based on Google's Chinese T5 generative model, with support for batch generation and multiprocessing
- Yet Another Chinese Learner Corpus
- Datasets from past shared tasks on Chinese grammatical error diagnosis