zhusleep / tagger_rewriter
An introductory article on dialogue rewriting
☆97 · Updated last year
Alternatives and similar repositories for tagger_rewriter:
Users interested in tagger_rewriter are comparing it to the libraries listed below:
- Reproduction of the ACL 2019 paper "Improving Multi-turn Dialogue Modelling with Utterance ReWriter" ☆133 · Updated 5 years ago
- Chinese UniLM pre-trained model ☆83 · Updated 4 years ago
- ☆102 · Updated 4 years ago
- The 2019 Chinese human-machine dialogue natural language understanding competition, hosted by the Social Media Processing Committee of the Chinese Information Processing Society of China ☆74 · Updated 4 years ago
- ☆89 · Updated 4 years ago
- ☆218 · Updated 5 years ago
- A RoBERTa-wwm-base model distilled from RoBERTa-wwm-large ☆65 · Updated 5 years ago
- Solving the LIC2019 machine reading comprehension task with BERT ☆89 · Updated 5 years ago
- ☆127 · Updated 2 years ago
- The retrieval model from team "网数ICT小分队", joint runner-up (3rd place) in JDDC 2019 ☆45 · Updated 2 years ago
- A simple experiment with Pattern-Exploiting Training on Chinese ☆170 · Updated 4 years ago
- Chinese pre-trained UniLM ☆28 · Updated 4 years ago
- lasertagger-chinese: a Chinese LaserTagger walkthrough, with sample data, annotations, and shell scripts for running it ☆75 · Updated 2 years ago
- ☆91 · Updated 4 years ago
- Unsupervised Chinese word segmentation and syntactic parsing with BERT ☆110 · Updated 4 years ago
- A Sentence Cloze Dataset for Chinese Machine Reading Comprehension (CMRC 2019) ☆126 · Updated 2 years ago
- A collection of the Chinese extractive MRC datasets published to date ☆119 · Updated 9 months ago
- Official code for the paper "Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models" ☆68 · Updated 3 years ago
- Sentence matching models, including unsupervised SimCSE, ESimCSE, and PromptBERT, and supervised SBERT and CoSENT ☆98 · Updated 2 years ago
- Joint learning of machine retrieval and reading comprehension; rank-6 solution for the 2nd national "Military Intelligent Machine Reading" Challenge (莱斯杯) ☆127 · Updated 4 years ago
- PyTorch version of the UniLM model ☆26 · Updated 3 years ago
- Open source code for the paper "A Co-Interactive Transformer for Joint Slot Filling and Intent Detection" ☆77 · Updated 3 years ago
- CLUE baseline pytorch: PyTorch baselines for CLUE ☆74 · Updated 5 years ago
- PyTorch baseline for the machine reading comprehension track of the 2021 Baidu Language and Intelligence Challenge ☆53 · Updated 3 years ago
- Pre-trained model interface for TensorFlow 1.x, supporting single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision, with flexible training, validation, and prediction ☆58 · Updated 3 years ago
- DistilBERT for Chinese: a BERT model distilled from large-scale Chinese pre-training ☆91 · Updated 5 years ago
- Transformers implementations (architectures, task examples, serving, and more) ☆95 · Updated 3 years ago
- OCNLI: Original Chinese Natural Language Inference ☆153 · Updated 3 years ago
- Source code for the ACL 2021 paper "PLOME: Pre-training with Misspelled Knowledge for Chinese Spelling Correction" ☆235 · Updated 2 years ago
- A simple experiment with the P-tuning method on Chinese ☆139 · Updated 4 years ago