lxwuguang / G-Reader
Summary of the 2018 Machine Reading Comprehension Technology Competition: ranked 6th by BLEU-4 and 14th by ROUGE-L among 1000+ teams from China and abroad (no ensemble, no pretrained word embeddings, no dropout).
☆31 · Updated 6 years ago
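The ranking above is reported in BLEU-4 and ROUGE-L. As a point of reference, the sketch below shows one common way these two metrics are computed for Chinese MRC answers; the character-level tokenization, the NLTK smoothing choice, and the beta of 1.2 in the ROUGE-L F-score are illustrative assumptions and may differ from the official competition evaluation script.

```python
# Minimal sketch of the two leaderboard metrics: BLEU-4 and ROUGE-L.
# Inputs are token lists; the official evaluation script may tokenize
# and aggregate differently (this is illustrative only).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def bleu4(reference_tokens, candidate_tokens):
    """Sentence-level BLEU-4 with uniform 4-gram weights and smoothing."""
    return sentence_bleu(
        [reference_tokens],
        candidate_tokens,
        weights=(0.25, 0.25, 0.25, 0.25),
        smoothing_function=SmoothingFunction().method1,
    )


def rouge_l_f1(reference_tokens, candidate_tokens, beta=1.2):
    """ROUGE-L F-score based on the longest common subsequence (LCS).

    beta=1.2 is an assumed recall weighting, not taken from the source.
    """
    m, n = len(reference_tokens), len(candidate_tokens)
    # Dynamic-programming table for LCS length.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if reference_tokens[i - 1] == candidate_tokens[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    recall, precision = lcs / m, lcs / n
    return ((1 + beta ** 2) * precision * recall) / (recall + beta ** 2 * precision)


if __name__ == "__main__":
    # Character-level tokens are a common choice for Chinese evaluation.
    ref = list("北京是中国的首都")
    cand = list("中国的首都是北京")
    print("BLEU-4:", bleu4(ref, cand))
    print("ROUGE-L:", rouge_l_f1(ref, cand))
```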
Related projects
Alternatives and complementary repositories for G-Reader
- 2018 Baidu Machine Reading Comprehension Competition ☆28 · Updated 6 years ago
- Capsule-based opinion-type reading comprehension model ☆89 · Updated 5 years ago
- 2019 Language and Intelligence Challenge, Knowledge-driven Dialogue: 5th place on leaderboard B, source code and models ☆25 · Updated 4 years ago
- AI Challenger 2018 opinion-type question reading comprehension, 8th place in the final round (8th place of AI Challenger 2018 MRC) ☆91 · Updated 5 years ago
- Adversarial-attack text matching competition ☆42 · Updated 4 years ago
- Joint slot filling and intent prediction using attention and a slot gate (NER, intent classification) ☆40 · Updated 5 years ago
- Fully end-to-end core entity recognition and sentiment prediction ☆34 · Updated 5 years ago
- CCKS 2019 Task 2: Entity Recognition and Linking ☆95 · Updated 5 years ago
- Chinese Judicial Reading Comprehension ☆31 · Updated 5 years ago
- 2019 Language and Intelligence Challenge: proactive conversation grounded in a knowledge graph ☆116 · Updated 5 years ago
- Related work on open-domain question answering systems over knowledge bases ☆69 · Updated 6 years ago
- 2020 Language and Intelligence Challenge: conversational recommendation task ☆51 · Updated 3 years ago
- Rank 2 solution (no BERT) for the 2019 Language and Intelligence Challenge, DuReader 2.0 Machine Reading Comprehension ☆127 · Updated 5 years ago
- Code for AI Challenger 2018 machine reading comprehension ☆27 · Updated 5 years ago
- Baseline for ccks2019-ipre ☆49 · Updated 5 years ago
- CHIP 2018 question matching competition, rank 6 solution ☆21 · Updated 5 years ago
- 2019 Daguan Cup, 6th place code ☆44 · Updated last year
- ☆32 · Updated 5 years ago
- 2019 Language and Intelligence Challenge, Knowledge-driven Dialogue: 5th place on leaderboard B, source code and models ☆27 · Updated 5 years ago
- Event subject extraction in the financial domain (CCKS 2019), a baseline ☆118 · Updated 5 years ago
- Fine-tuning Google's pretrained BERT model for Chinese multi-class classification ☆41 · Updated 5 years ago
- Baseline for a new Kaggle competition: BERT fine-tuning plus a tensor2tensor-based Transformer encoder ☆61 · Updated 5 years ago
- Named entity recognition in PyTorch with a TextCNN-BiLSTM-CRF model ☆42 · Updated 6 years ago
- PyTorch implementation of the DMN+ model (Chinese dataset) ☆21 · Updated 5 years ago
- Text classification and representation learning network based on BiLSTM and self-attention ☆29 · Updated 5 years ago
- Chinese translation of the paper "XLNet: Generalized Autoregressive Pretraining for Language Understanding" ☆50 · Updated 5 years ago
- ☆15 · Updated 5 years ago
- 1st place solution for the CCKS 2018 open-domain Chinese question answering task ☆111 · Updated 5 years ago