bojone / KgCLUE-bert4keras
Knowledge graph question answering based on "Seq2Seq + prefix tree"
☆70 · Updated 3 years ago
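The "Seq2Seq + prefix tree" approach named above constrains a seq2seq decoder so that it can only emit entity names that actually exist in the knowledge base. A minimal sketch of the prefix-tree (trie) side of that idea, assuming character-level tokenization; all names and entities here are illustrative assumptions, not the repository's actual code:

```python
# Illustrative sketch (assumption): a trie over KB entity names; at each
# decoding step the model's vocabulary is masked down to allowed_next_tokens.

class TrieNode:
    def __init__(self):
        self.children = {}   # token -> TrieNode
        self.is_end = False  # True if a complete entity name ends here


def build_trie(entities):
    """Insert each tokenized entity name into a trie."""
    root = TrieNode()
    for tokens in entities:
        node = root
        for tok in tokens:
            node = node.children.setdefault(tok, TrieNode())
        node.is_end = True
    return root


def allowed_next_tokens(root, prefix):
    """Tokens that may legally follow `prefix` while staying inside the trie."""
    node = root
    for tok in prefix:
        node = node.children.get(tok)
        if node is None:
            return set()  # prefix leaves the trie: nothing is allowed
    return set(node.children)


# Hypothetical entity vocabulary, tokenized per character as is common for Chinese.
trie = build_trie([list("周杰伦"), list("周星驰"), list("刘德华")])
print(allowed_next_tokens(trie, list("周")))  # {'杰', '星'}
```

During beam search, the decoder's next-token logits are masked so only the returned token set keeps nonzero probability, guaranteeing every generated entity string is a valid KB entry.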
Alternatives and similar repositories for KgCLUE-bert4keras:
Users interested in KgCLUE-bert4keras are comparing it to the repositories listed below.
- Efficient GlobalPointer in PyTorch ☆53 · Updated 2 years ago
- Using LEAR for NER extraction ☆29 · Updated 2 years ago
- LIC2020 relation extraction competition: a PyTorch implementation of 苏神 (Su Jianlin)'s model ☆102 · Updated 4 years ago
- Chinese multi-hop question answering dataset ☆73 · Updated 6 years ago
- CCKS 2020: few-shot cross-type transfer event extraction for the financial domain; this project implements an MRC-based event extraction method ☆39 · Updated 2 years ago
- CCKS financial event subject extraction ☆72 · Updated 4 years ago
- CCKS 2020 evaluation on ontology-based automated financial knowledge graph construction: write-up of the 5th-place approach ☆49 · Updated 2 years ago
- Source code for the paper "LET: Linguistic Knowledge Enhanced Graph Transformer for Chinese Short Text Matching", AAAI 2021 ☆48 · Updated 3 years ago
- The implementation of our EMNLP 2021 paper "Enhanced Language Representation with Label Knowledge for Span Extraction" ☆114 · Updated last year
- CCKS 2020: entity linking for Chinese short texts ☆40 · Updated 3 years ago
- Tianchi COVID-19 similar sentence pair matching competition, rank 8 ☆52 · Updated 4 years ago
- Benchmark of KgCLUE with different models and methods ☆27 · Updated 3 years ago
- TIANCHI Xiaobu Assistant dialogue short-text semantic matching, BERT baseline ☆32 · Updated 3 years ago
- ☆89 · Updated 4 years ago
- CCKS 2020: evaluation on ontology-based automated financial knowledge graph construction ☆89 · Updated 2 years ago
- CCKS 2020 entity linking for Chinese short texts. Main approach: a BiLSTM- and Attention-based semantic model matches the query against candidate documents, then pairwise ranking over the match scores selects the best knowledge-base entity ☆47 · Updated 3 years ago
- Baidu 2021 Language and Intelligence Challenge: PyTorch baseline for the machine reading comprehension track ☆53 · Updated 3 years ago
- Baidu 2020 Language and Intelligence Challenge: solution code for the event extraction track ☆53 · Updated 4 years ago
- Chinese UniLM pretrained model ☆83 · Updated 3 years ago
- Named entity recognition using a multi-head approach ☆33 · Updated 3 years ago
- ☆57 · Updated 2 years ago
- ☆29 · Updated 5 years ago
- Baselines for the CCKS 2022 task "Commonsense Knowledge Salience Evaluation" ☆32 · Updated 2 years ago
- The source code of "FGN: Fusion Glyph Network for Chinese Named Entity Recognition". SOTA Chinese NER method fusing both glyph represen… ☆50 · Updated 4 years ago
- Simple experiments with Pattern-Exploiting Training on Chinese ☆170 · Updated 4 years ago
- Label Mask for Multi-label Classification ☆56 · Updated 3 years ago
- Chinese BigBird pretrained model ☆91 · Updated 2 years ago
- 2020 Language and Intelligence Challenge: joint-extraction baseline for the event extraction task ☆54 · Updated 4 years ago
- NLP experiments: new-word mining + continued pre-training of a pretrained model ☆47 · Updated last year
- A distilled RoBERTa-wwm-base model, distilled from RoBERTa-wwm-large ☆65 · Updated 4 years ago