Tanh-wink / NCPQA
DataFountain epidemic government-affairs QA assistant competition. We divided this task into two parts: document retrieval and answer extraction. Built with PyTorch, ALBERT, and model fusion.
☆14 Updated 2 years ago
Alternatives and similar repositories for NCPQA
Users that are interested in NCPQA are comparing it to the libraries listed below
- Simplified version of bert-flat with extensive comments added ☆15 Updated 3 years ago
- Using LEAR for NER extraction ☆29 Updated 3 years ago
- Baseline model implementations for several NLP tasks, including text classification, named entity recognition, entity relation extraction, NL2SQL, CKBQA, and various BERT downstream-task applications ☆48 Updated 4 years ago
- ☆17 Updated 4 years ago
- General-purpose KBQA; training data from CCKS 2018 and 2019, knowledge-graph data crawled from Baidu Baike ☆24 Updated 4 years ago
- A full-process dialogue system that can be deployed online ☆99 Updated 3 years ago
- Second-place solution for the intelligent human-computer interaction natural language understanding track of the 2021 CCF BDCI National Information Retrieval Challenge Cup (CCIR-Cup) ☆25 Updated 3 years ago
- Qianyan (千言) multi-skill dialogue, covering chit-chat, knowledge-grounded dialogue, and recommendation dialogue ☆28 Updated 3 years ago
- LIC2020 relation extraction competition; a PyTorch implementation of Su Jianlin (苏神)'s model ☆102 Updated 4 years ago
- CCKS financial event subject extraction ☆72 Updated 4 years ago
- ☆15 Updated 4 years ago
- 2021 Language and Intelligence Technology Competition: machine reading comprehension task ☆30 Updated 4 years ago
- NLP relation extraction: sequence labeling, cascaded pointer networks, Multi-head Selection, Deep Biaffine Attention ☆102 Updated 4 years ago
- Chinese Knowledge Base Question Answering ☆23 Updated 5 years ago
- ☆33 Updated 4 years ago
- Named entity recognition using a multi-head approach ☆34 Updated 4 years ago
- Medical named entity recognition based on PaddleNLP's open-source extractive UIE (PyTorch implementation) ☆44 Updated 3 years ago
- Detailed annotations of the implementation principles and matrix operations in Su Jianlin (苏神)'s bert4keras, for easier study; bert4keras: https://github.com/bojone/bert4keras ☆42 Updated 4 years ago
- PyTorch version of the UniLM model ☆27 Updated 4 years ago
- Reproduction of the paper "Simplify the Usage of Lexicon in Chinese NER" ☆42 Updated 4 years ago
- Nested entity recognition based on span classification and negative sampling ☆14 Updated 2 years ago
- Joint learning of machine retrieval and reading; rank-6 solution for the Laisi Cup 2nd national "Military Intelligence Machine Reading" challenge ☆129 Updated 4 years ago
- First-place solution for the CCKS 2021 off-topic answer (答非所问) detection competition ☆27 Updated 3 years ago
- 2019 Chinese human-computer dialogue natural language understanding competition, hosted by the Social Media Processing committee of the Chinese Information Processing Society of China ☆76 Updated 5 years ago
- Code for the ACL 2020 paper "FLAT: Chinese NER Using Flat-Lattice Transformer"; I annotated, modified, and extended parts of the source to make it easier to reproduce ☆56 Updated 4 years ago
- Tonghuashun (同花顺) algorithm challenge platform: [Sep–Oct bimonthly contest] cross-domain transfer for text semantic matching ☆11 Updated 3 years ago
- Knowledge Graph ☆175 Updated 2 years ago
- Multi-turn dialogue slot filling ☆20 Updated 6 years ago
- Extractive machine reading comprehension based on PyTorch + BERT ☆20 Updated 2 years ago
- Fine-tuning pretrained language models (BERT, RoBERTa, XLBert, etc.) to compute the similarity between two texts (cast as a sentence-pair classification task); suitable for Chinese text ☆90 Updated 5 years ago