HaishuoFang / Find_New_token
☆14 Updated 6 years ago
Alternatives and similar repositories for Find_New_token:
Users interested in Find_New_token are comparing it to the repositories listed below.
- A PyTorch implementation of the DMN+ model (Chinese dataset) ☆19 Updated 6 years ago
- Final code for the AI Challenge 2018. ☆16 Updated 6 years ago
- ☆59 Updated 5 years ago
- ☆14 Updated 7 years ago
- ☆25 Updated 5 years ago
- An implementation of "Two are Better than One: An Ensemble of Retrieval- and Generation-Based Dialog Systems" ☆14 Updated 5 years ago
- Implementation of "Open Domain Question Answering System Based on Knowledge Base" ☆13 Updated 8 years ago
- Applications of ELMo to NLP tasks such as question answering and text classification ☆15 Updated 6 years ago
- Joint slot filling and intent prediction using attention and a slot gate (NER, intent classification) ☆40 Updated 5 years ago
- ☆14 Updated 5 years ago
- Implementation of the ESIM model for natural language inference with TensorFlow ☆8 Updated 6 years ago
- Rank 6 solution for the CHIP 2018 question-matching competition ☆21 Updated 6 years ago
- Implementation of Attention-over-Attention Neural Networks for Reading Comprehension ☆20 Updated 6 years ago
- 2020 Language and Intelligence Challenge: relation extraction task (https://aistudio.baidu.com/aistudio/competition/detail/31?lang=zh_CN) ☆24 Updated 4 years ago
- BERT-based Chinese named entity recognition (PyTorch) ☆17 Updated 5 years ago
- Answer selection task based on the WikiQA dataset. ☆45 Updated 8 years ago
- 2020 Language and Intelligence Challenge: recommendation-oriented dialogue task ☆51 Updated 3 years ago
- PyTorch implementation of "Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling" ☆24 Updated 6 years ago
- A RoBERTa-wwm-base model distilled from RoBERTa-wwm-large ☆65 Updated 5 years ago
- Dataset and Baseline for SMP-MCC2020 ☆23 Updated last year
- ☆23 Updated 5 years ago
- Convert PyTorch BERT weights to TensorFlow ☆21 Updated 4 years ago
- 2019 Language and Intelligence Challenge, knowledge-driven dialogue: 5th place on leaderboard B, source code and models ☆27 Updated 5 years ago
- 2018 "莱斯杯" military intelligence machine reading comprehension challenge (top 5%, 14th/247) ☆25 Updated 6 years ago
- ESIM model: implementation of Enhanced LSTM for natural language inference ☆21 Updated 6 years ago
- Chinese version of the ACL 2020 Program Committee blog posts ☆14 Updated 4 years ago
- Summary of the 2018 Machine Reading Comprehension Competition: ranked 6th by BLEU-4 and 14th by ROUGE-L among 1,000+ teams worldwide (no ensembling, no pretrained word embeddings, no dropout) ☆30 Updated 6 years ago
- 5th-place code for the 2019 达观杯 (DataGrand Cup) information extraction competition ☆20 Updated 5 years ago
- Fine-tuning Google's pre-trained BERT model for Chinese multiclass classification ☆40 Updated 6 years ago
- ☆50 Updated 6 years ago