spiritdjy / MixPaper
Paper reading notes
☆21 · Updated 4 years ago
Alternatives and similar repositories for MixPaper
Users that are interested in MixPaper are comparing it to the libraries listed below
- A long-text classifier implemented in PyTorch ☆46 · Updated 6 years ago
- A collection of methods for unsupervised text generation ☆49 · Updated 4 years ago
- 2019 BDCI Internet finance new entity discovery ☆39 · Updated 5 years ago
- Tianchi COVID-19 similar sentence pair matching competition, Rank 8 ☆52 · Updated 5 years ago
- ☆18 · Updated 4 years ago
- Datagrand algorithm competition NER task: from retraining BERT to fine-tuning and prediction ☆75 · Updated 2 years ago
- 4th-place solution for the 5th DataFountain Datagrand Cup ☆50 · Updated 2 years ago
- 4th-place solution for the 5th DataFountain Datagrand Cup ☆12 · Updated 3 years ago
- 5th-place code for the 2019 Datagrand Cup information extraction task ☆20 · Updated 6 years ago
- 24×2 pretrained small BERT models, a handy tool for NLP practitioners ☆51 · Updated 5 years ago
- This repo contains a PyTorch implementation of a pretrained ERNIE model for text classification. ☆59 · Updated 2 years ago
- Dilate Gated Convolutional Neural Network for Machine Reading Comprehension ☆39 · Updated 6 years ago
- Currently covers only the reading comprehension track ☆14 · Updated 4 years ago
- ☆51 · Updated 4 years ago
- Chinese named entity recognition based on ELMo and TensorFlow ☆20 · Updated 5 years ago
- Code for TDEER: An Efficient Translating Decoding Schema for Joint Extraction of Entities and Relations (EMNLP 2021) ☆10 · Updated 3 years ago
- 3rd-place solution for CCF BDCI financial negative information and entity detection ☆55 · Updated 5 years ago
- Solving text generation tasks with the GPT-2 language model, including papers, code, demos, and hands-on tutorials ☆26 · Updated 6 years ago
- LES Cup: 2nd National "Military Intelligence Machine Reading" Challenge, Rank 7 solution ☆40 · Updated 5 years ago
- Knowledge graph question answering based on Seq2Seq plus a prefix tree (trie) ☆71 · Updated 3 years ago
- CCKS 2019 Task 2: Entity Recognition and Linking ☆94 · Updated 6 years ago
- Baseline for the CCKS 2020 entity linking competition (PyTorch) ☆19 · Updated 5 years ago
- A simple Chinese NER implementation with BERT-BiLSTM-CRF using the fastNLP framework ☆15 · Updated 5 years ago
- CCF-BDCI Big Data & Computing Intelligence Contest: Internet finance new entity discovery, 9th place ☆55 · Updated 5 years ago
- A lightweight text matching framework for Chinese, integrating classic and SOTA models for text matching, textual entailment, paraphrase identification, and related tasks ☆26 · Updated 5 years ago
- 2019 Language and Intelligence Challenge, knowledge-driven dialogue track: source code and models for 5th place on leaderboard B ☆27 · Updated 6 years ago
- 3rd-place solution for the 2020 BAAI-JD multimodal dialogue challenge (JDDC 2020) ☆42 · Updated 4 years ago
- Adversarial attack text matching competition ☆42 · Updated 5 years ago
- 2020 Alibaba Cloud Tianchi Big Data Competition: question generation from traditional Chinese medicine literature ☆30 · Updated 4 years ago
- A BERT-based pretrained language model implementation in two steps: pretraining and fine-tuning. Currently includes BERT, RoBERTa, and ALBERT, all supporting Whole Word Masking ☆17 · Updated 5 years ago