thu-coai / KdConv
KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation
☆486Updated 2 years ago
Alternatives and similar repositories for KdConv
Users that are interested in KdConv are comparing it to the libraries listed below
- A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset☆697Updated last year
- CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation☆490Updated 2 years ago
- An upgraded version of SimBERT (SimBERTv2)!☆443Updated 3 years ago
- A Span-Extraction Dataset for Chinese Machine Reading Comprehension (CMRC 2018)☆435Updated 3 years ago
- A large-scale Chinese natural language inference and semantic similarity dataset☆431Updated 5 years ago
- ☆439Updated 3 years ago
- A Chinese generative pre-trained model☆568Updated 3 years ago
- ☆417Updated last year
- A framework for cleaning Chinese dialog data☆272Updated 4 years ago
- A Chinese BERT with words (rather than characters) as the basic unit☆469Updated 3 years ago
- Chinese text paraphrasing and data augmentation, built on the LaserTagger model☆321Updated last year
- ☆269Updated last year
- This repository is for the paper "A Hybrid Approach to Automatic Corpus Generation for Chinese Spelling Check"☆295Updated 5 years ago
- Modeling Multi-turn Conversation with Deep Utterance Aggregation (COLING 2018)☆282Updated 5 years ago
- Simple vector whitening to improve sentence-embedding quality☆484Updated 4 years ago
- Revisiting Pre-trained Models for Chinese Natural Language Processing (MacBERT)☆678Updated 2 weeks ago
- FewCLUE: a Chinese few-shot learning evaluation benchmark☆512Updated 2 years ago
- EVA: Large-scale Pre-trained Chit-Chat Models☆307Updated 2 years ago
- Champion/runner-up code for machine reading comprehension competitions, plus Chinese pre-trained MRC models☆743Updated 2 years ago
- Chinese language model pre-training in PyTorch☆389Updated 5 years ago
- A BERT for retrieval and generation☆860Updated 4 years ago
- Collections of resources from Joint Laboratory of HIT and iFLYTEK Research (HFL)☆373Updated 2 years ago
- ☆441Updated 3 months ago
- A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models☆817Updated 5 years ago
- Mengzi Pretrained Models☆536Updated 2 years ago
- Chinese natural language inference and semantic similarity datasets☆358Updated 3 years ago
- A semantic understanding and matching dataset with 3,000,000+ examples, usable for unsupervised contrastive learning, semi-supervised learning, etc., to build the best-performing Chinese pre-trained models☆300Updated 2 years ago
- An end-to-end long-text summarization model (CAIL 2020 judicial summarization track)☆397Updated last year
- Source code for the paper "PLOME: Pre-training with Misspelled Knowledge for Chinese Spelling Correction" in ACL2021☆237Updated 2 years ago
- PERT: Pre-training BERT with Permuted Language Model☆364Updated 2 weeks ago